*
* ==> Audit <==
* |---------|--------------------------------|-----------------|---------|---------|---------------------|---------------------|
| Command |              Args              |     Profile     |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------|-----------------|---------|---------|---------------------|---------------------|
| start   | cluster-unfixed start          | cluster-unfixed | rogermm | v1.28.0 | 05 Jan 23 11:06 PST | 05 Jan 23 11:08 PST |
|         | --dns-domain cluster.xpt       |                 |         |         |                     |                     |
| addons  | cluster-unfixed addons enable  | cluster-unfixed | rogermm | v1.28.0 | 05 Jan 23 11:08 PST | 05 Jan 23 11:09 PST |
|         | registry                       |                 |         |         |                     |                     |
|---------|--------------------------------|-----------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/01/05 11:06:26
Running on machine: iMac-Pro
Binary: Built with gc go1.19.3 for darwin/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0105 11:06:26.752771 30321 out.go:296] Setting OutFile to fd 1 ...
I0105 11:06:26.753048 30321 out.go:348] isatty.IsTerminal(1) = true
I0105 11:06:26.753052 30321 out.go:309] Setting ErrFile to fd 2...
I0105 11:06:26.753057 30321 out.go:348] isatty.IsTerminal(2) = true
I0105 11:06:26.753194 30321 root.go:334] Updating PATH: /Volumes/data/.minikube/bin
W0105 11:06:26.753371 30321 root.go:311] Error reading config file at /Volumes/data/.minikube/config/config.json: open /Volumes/data/.minikube/config/config.json: no such file or directory
I0105 11:06:26.756107 30321 out.go:303] Setting JSON to false
I0105 11:06:26.795353 30321 start.go:116] hostinfo: {"hostname":"iMac-Pro.local","uptime":16153,"bootTime":1672929433,"procs":547,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.1","kernelVersion":"22.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f63ee5c7-43fb-5ad5-b0a2-3daecb8534cd"}
W0105 11:06:26.795471 30321 start.go:124] gopshost.Virtualization returned error: not implemented yet
I0105 11:06:26.798497 30321 out.go:177] 😄 [cluster-unfixed] minikube v1.28.0 on Darwin 13.1
I0105 11:06:26.813729 30321 notify.go:220] Checking for updates...
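The audit table above records the two commands behind this log. As shell invocations they would look roughly like the following; the profile name and flags are taken from the Args column, but the exact original command lines are an inference, not a copy of the shell history:

    # Inferred from the audit table, not copied verbatim
    minikube start -p cluster-unfixed --dns-domain cluster.xpt
    minikube addons enable registry -p cluster-unfixed
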
I0105 11:06:26.813758 30321 out.go:177] ▪ MINIKUBE_HOME=/Volumes/data/.minikube
W0105 11:06:26.813817 30321 preload.go:295] Failed to list preload files: open /Volumes/data/.minikube/cache/preloaded-tarball: no such file or directory
I0105 11:06:26.826536 30321 out.go:177] ▪ MINIKUBE_BACKUP=/Volumes/backup/minikube-backup
I0105 11:06:26.832566 30321 driver.go:365] Setting default libvirt URI to qemu:///system
I0105 11:06:26.832612 30321 global.go:111] Querying for installed drivers using PATH=/Volumes/data/.minikube/bin:/usr/local/opt/go/libexec/bin:/Users/rogermm/go/bin:/Users/rogermm/.asdf/shims:/usr/local/opt/asdf/libexec/bin:/Users/rogermm/.krew/bin:/Users/rogermm/.pyenv/shims:/Users/rogermm/.jenv/shims:/Users/rogermm/.jenv/bin:/Users/rogermm/.krew/bin:/usr/local/bin:/System/Cryptexes/App/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Applications/VMware Fusion.app/Contents/Public:/Library/Apple/usr/bin:/usr/local/bin:/Users/rogermm/bin:/Users/rogermm/go/bin:/opt/s3cmd
W0105 11:06:26.911821 30321 docker.go:113] docker version returned error: exit status 1
I0105 11:06:26.911885 30321 global.go:119] docker default: true priority: 9, state: {Installed:true Healthy:false Running:false NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? Reason:PROVIDER_DOCKER_NOT_RUNNING Fix:Start the Docker service Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/ Version:}
I0105 11:06:26.912071 30321 global.go:119] qemu2 default: true priority: 3, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "qemu-system-x86_64": executable file not found in $PATH Reason: Fix:Install qemu-system Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/qemu/ Version:}
I0105 11:06:26.934179 30321 virtualbox.go:136] virtual box version: 7.0.4r154605
I0105 11:06:26.934238 30321 global.go:119] virtualbox default: true priority: 6, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:7.0.4r154605 }
I0105 11:06:26.934275 30321 global.go:119] vmwarefusion default: false priority: 1, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:the 'vmwarefusion' driver is no longer available Reason: Fix:Switch to the newer 'vmware' driver by using '--driver=vmware'. This may require first deleting your existing cluster Doc:https://minikube.sigs.k8s.io/docs/drivers/vmware/ Version:}
I0105 11:06:26.943511 30321 global.go:119] hyperkit default: true priority: 8, state: {Installed:true Healthy:true Running:true NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0105 11:06:26.943667 30321 global.go:119] parallels default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "prlctl": executable file not found in $PATH Reason: Fix:Install Parallels Desktop for Mac Doc:https://minikube.sigs.k8s.io/docs/drivers/parallels/ Version:}
I0105 11:06:26.943928 30321 global.go:119] podman default: true priority: 3, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "podman": executable file not found in $PATH Reason: Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/ Version:}
I0105 11:06:26.943941 30321 global.go:119] ssh default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0105 11:06:26.944109 30321 global.go:119] vmware default: true priority: 7, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0105 11:06:26.944295 30321 driver.go:300] not recommending "ssh" due to default: false
I0105 11:06:26.944300 30321 driver.go:295] not recommending "docker" due to health: "docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0105 11:06:26.944319 30321 driver.go:335] Picked: hyperkit
I0105 11:06:26.944328 30321 driver.go:336] Alternatives: [vmware virtualbox ssh]
I0105 11:06:26.944333 30321 driver.go:337] Rejects: [docker qemu2 vmwarefusion parallels podman]
I0105 11:06:26.955230 30321 out.go:177] ✨ Automatically selected the hyperkit driver. Other choices: vmware, virtualbox, ssh
I0105 11:06:26.983267 30321 start.go:282] selected driver: hyperkit
I0105 11:06:26.983288 30321 start.go:808] validating driver "hyperkit" against
I0105 11:06:26.983328 30321 start.go:819] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0105 11:06:26.983652 30321 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0105 11:06:26.983912 30321 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Volumes/data/.minikube/bin:/usr/local/opt/go/libexec/bin:/Users/rogermm/go/bin:/Users/rogermm/.asdf/shims:/usr/local/opt/asdf/libexec/bin:/Users/rogermm/.krew/bin:/Users/rogermm/.pyenv/shims:/Users/rogermm/.jenv/shims:/Users/rogermm/.jenv/bin:/Users/rogermm/.krew/bin:/usr/local/bin:/System/Cryptexes/App/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Applications/VMware Fusion.app/Contents/Public:/Library/Apple/usr/bin:/usr/local/bin:/Users/rogermm/bin:/Users/rogermm/go/bin:/opt/s3cmd
W0105 11:06:26.983988 30321 install.go:62] docker-machine-driver-hyperkit: exec: "docker-machine-driver-hyperkit": executable file not found in $PATH
I0105 11:06:26.989486 30321 out.go:177] 💾 Downloading driver docker-machine-driver-hyperkit:
I0105 11:06:26.994768 30321 download.go:101] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.28.0/docker-machine-driver-hyperkit-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.28.0/docker-machine-driver-hyperkit-amd64.sha256 -> /Volumes/data/.minikube/bin/docker-machine-driver-hyperkit
I0105 11:06:27.633730 30321 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.28.0/docker-machine-driver-hyperkit-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.28.0/docker-machine-driver-hyperkit-amd64.sha256 Dst:/Volumes/data/.minikube/bin/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x103482e00 0x103482e00 0x103482e00 0x103482e00 0x103482e00 0x103482e00 0x103482e00] Decompressors:map[bz2:0x103482e00 gz:0x103482e00 tar:0x103482e00 tar.bz2:0x103482e00 tar.gz:0x103482e00 tar.xz:0x103482e00 tar.zst:0x103482e00 tbz2:0x103482e00 tgz:0x103482e00 txz:0x103482e00 tzst:0x103482e00 xz:0x103482e00 zip:0x103482e00 zst:0x103482e00] Getters:map[file:0xc0007b3d40 http:0xc000113a40 https:0xc000113a90] Dir:false ProgressListener:0x10343e860 Insecure:false DisableSymlinks:false Options:[0x101678700]}: invalid checksum: Error downloading checksum file: bad response code: 404.
trying to get the common version I0105 11:06:27.633795 30321 download.go:101] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.28.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.28.0/docker-machine-driver-hyperkit.sha256 -> /Volumes/data/.minikube/bin/docker-machine-driver-hyperkit W0105 11:06:38.940282 30321 out.go:239] โ— Unable to update hyperkit driver: download: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.28.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.28.0/docker-machine-driver-hyperkit.sha256 Dst:/Volumes/data/.minikube/bin/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x103482e00 0x103482e00 0x103482e00 0x103482e00 0x103482e00 0x103482e00 0x103482e00] Decompressors:map[bz2:0x103482e00 gz:0x103482e00 tar:0x103482e00 tar.bz2:0x103482e00 tar.gz:0x103482e00 tar.xz:0x103482e00 tar.zst:0x103482e00 tbz2:0x103482e00 tgz:0x103482e00 txz:0x103482e00 tzst:0x103482e00 xz:0x103482e00 zip:0x103482e00 zst:0x103482e00] Getters:map[file:0xc001139210 http:0xc00089d590 https:0xc00089d5e0] Dir:false ProgressListener:0x10343e860 Insecure:false DisableSymlinks:false Options:[0x101678700]}: invalid checksum: Error downloading checksum file: bad response code: 503 I0105 11:06:38.940390 30321 start_flags.go:303] no existing cluster config was found, will generate one from the flags I0105 11:06:38.941006 30321 start_flags.go:384] Using suggested 6000MB memory alloc based on sys=131072MB, container=0MB I0105 11:06:38.941336 30321 start_flags.go:883] Wait components to verify : map[apiserver:true system_pods:true] I0105 11:06:38.941369 30321 cni.go:95] Creating CNI manager for "" I0105 11:06:38.941382 30321 cni.go:169] CNI unnecessary in this configuration, recommending no CNI I0105 11:06:38.941432 30321 start_flags.go:317] config: {Name:cluster-unfixed KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:cluster-unfixed Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.xpt ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 
CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} I0105 11:06:38.941851 30321 iso.go:124] acquiring lock: {Name:mkba4bd0feff696f37cbdd8fa2dfd9ff3b99b70d Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0105 11:06:38.955345 30321 out.go:177] ๐Ÿ’ฟ Downloading VM boot image ... I0105 11:06:38.964907 30321 download.go:101] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.28.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.28.0-amd64.iso.sha256 -> /Volumes/data/.minikube/cache/iso/amd64/minikube-v1.28.0-amd64.iso I0105 11:06:46.407761 30321 out.go:177] ๐Ÿ‘ Starting control plane node cluster-unfixed in cluster cluster-unfixed I0105 11:06:46.415565 30321 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker I0105 11:06:46.567337 30321 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 I0105 11:06:46.573934 30321 cache.go:57] Caching tarball of preloaded images I0105 11:06:46.574094 30321 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker I0105 11:06:46.581953 30321 out.go:177] ๐Ÿ’พ Downloading Kubernetes v1.25.3 preload ... I0105 11:06:46.586595 30321 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 ... I0105 11:06:46.832969 30321 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4?checksum=md5:624cb874287e7e3d793b79e4205a7f98 -> /Volumes/data/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 I0105 11:06:55.770346 30321 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 ... I0105 11:06:55.770526 30321 preload.go:256] verifying checksum of /Volumes/data/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 ... I0105 11:06:56.578260 30321 cache.go:60] Finished verifying existence of preloaded tar for v1.25.3 on docker I0105 11:06:56.578545 30321 profile.go:148] Saving config to /Volumes/data/.minikube/profiles/cluster-unfixed/config.json ... 
I0105 11:06:56.578576 30321 lock.go:35] WriteFile acquiring /Volumes/data/.minikube/profiles/cluster-unfixed/config.json: {Name:mkb3d0dfcef549473310d347a20b61abd301758e Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0105 11:06:56.578950 30321 cache.go:208] Successfully downloaded all kic artifacts I0105 11:06:56.578982 30321 start.go:364] acquiring machines lock for cluster-unfixed: {Name:mk808c95ad35b27da99d888e51755d9a75d3573f Clock:{} Delay:500ms Timeout:13m0s Cancel:} I0105 11:06:56.579058 30321 start.go:368] acquired machines lock for "cluster-unfixed" in 67.643ยตs I0105 11:06:56.579085 30321 start.go:93] Provisioning new machine with config: &{Name:cluster-unfixed KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.28.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:cluster-unfixed Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.xpt ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} I0105 11:06:56.579143 30321 start.go:125] createHost starting for "" (driver="hyperkit") I0105 11:06:56.593407 30321 out.go:204] ๐Ÿ”ฅ Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ... I0105 11:06:56.594350 30321 start.go:128] duration metric: createHost completed in 15.065379ms I0105 11:06:56.594356 30321 start.go:83] releasing machines lock for "cluster-unfixed", held for 15.293975ms W0105 11:06:56.594368 30321 start.go:603] error starting host: new host: Driver "hyperkit" not found. Do you have the plugin binary "docker-machine-driver-hyperkit" accessible in your PATH? 
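The "Driver \"hyperkit\" not found" error above appears to follow from the two failed driver downloads earlier in the log (404 for the arch-specific binary, 503 for the common one): no docker-machine-driver-hyperkit ever lands under MINIKUBE_HOME/bin, so host creation cannot find it. A quick manual check, and one way to skip the fallback, is sketched below; the paths come from this log, and the rerun command is an assumption about how the driver might be pinned, not something taken from the log.

    # Did the driver binary actually get installed? (MINIKUBE_HOME from this log)
    ls -l /Volumes/data/.minikube/bin/docker-machine-driver-hyperkit

    # Assumed rerun that pins the driver minikube eventually falls back to
    minikube start -p cluster-unfixed --driver=vmware --dns-domain cluster.xpt
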
I0105 11:06:56.594477 30321 cli_runner.go:164] Run: docker container inspect cluster-unfixed --format={{.State.Status}} W0105 11:06:56.671649 30321 cli_runner.go:211] docker container inspect cluster-unfixed --format={{.State.Status}} returned with exit code 1 I0105 11:06:56.671732 30321 delete.go:46] couldn't inspect container "cluster-unfixed" before deleting: unknown state "cluster-unfixed": docker container inspect cluster-unfixed --format={{.State.Status}}: exit status 1 stdout: stderr: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? I0105 11:06:56.671999 30321 cli_runner.go:164] Run: podman container inspect cluster-unfixed --format={{.State.Status}} I0105 11:06:56.672076 30321 delete.go:46] couldn't inspect container "cluster-unfixed" before deleting: unknown state "cluster-unfixed": podman container inspect cluster-unfixed --format={{.State.Status}}: exec: "podman": executable file not found in $PATH stdout: stderr: W0105 11:06:56.672097 30321 start.go:608] delete host: Docker machine "cluster-unfixed" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one. W0105 11:06:56.672327 30321 out.go:239] ๐Ÿคฆ StartHost failed, but will try again: new host: Driver "hyperkit" not found. Do you have the plugin binary "docker-machine-driver-hyperkit" accessible in your PATH? I0105 11:06:56.672363 30321 start.go:618] Will try again in 5 seconds ... I0105 11:07:01.672406 30321 start.go:364] acquiring machines lock for cluster-unfixed: {Name:mk808c95ad35b27da99d888e51755d9a75d3573f Clock:{} Delay:500ms Timeout:13m0s Cancel:} I0105 11:07:01.672591 30321 start.go:368] acquired machines lock for "cluster-unfixed" in 162.299ยตs I0105 11:07:01.672663 30321 start.go:93] Provisioning new machine with config: &{Name:cluster-unfixed KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.28.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:cluster-unfixed Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.xpt ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false 
MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} I0105 11:07:01.672793 30321 start.go:125] createHost starting for "" (driver="hyperkit") I0105 11:07:01.684713 30321 out.go:204] ๐Ÿ”ฅ Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ... I0105 11:07:01.685164 30321 start.go:128] duration metric: createHost completed in 12.362204ms I0105 11:07:01.685173 30321 start.go:83] releasing machines lock for "cluster-unfixed", held for 12.573795ms W0105 11:07:01.685443 30321 out.go:239] ๐Ÿ˜ฟ Failed to start hyperkit VM. Running "minikube delete -p cluster-unfixed" may fix it: new host: Driver "hyperkit" not found. Do you have the plugin binary "docker-machine-driver-hyperkit" accessible in your PATH? W0105 11:07:01.685690 30321 out.go:239] โ— Startup with hyperkit driver failed, trying with alternate driver vmware: Failed to start host: new host: Driver "hyperkit" not found. Do you have the plugin binary "docker-machine-driver-hyperkit" accessible in your PATH? I0105 11:07:01.685992 30321 config.go:180] Loaded profile config "cluster-unfixed": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3 I0105 11:07:01.686012 30321 delete.go:325] Deleting cluster-unfixed I0105 11:07:01.686018 30321 delete.go:330] cluster-unfixed configuration: &{Name:cluster-unfixed KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.28.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:cluster-unfixed Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.xpt ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker 
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} I0105 11:07:01.686240 30321 config.go:180] Loaded profile config "cluster-unfixed": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3 I0105 11:07:01.686387 30321 cli_runner.go:164] Run: docker container inspect cluster-unfixed --format={{.State.Status}} W0105 11:07:01.776558 30321 cli_runner.go:211] docker container inspect cluster-unfixed --format={{.State.Status}} returned with exit code 1 I0105 11:07:01.776623 30321 delete.go:46] couldn't inspect container "cluster-unfixed" before deleting: unknown state "cluster-unfixed": docker container inspect cluster-unfixed --format={{.State.Status}}: exit status 1 stdout: stderr: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? I0105 11:07:01.776876 30321 cli_runner.go:164] Run: podman container inspect cluster-unfixed --format={{.State.Status}} I0105 11:07:01.776895 30321 delete.go:46] couldn't inspect container "cluster-unfixed" before deleting: unknown state "cluster-unfixed": podman container inspect cluster-unfixed --format={{.State.Status}}: exec: "podman": executable file not found in $PATH stdout: stderr: I0105 11:07:01.776921 30321 delete.go:432] Host cluster-unfixed does not exist. Proceeding ahead with cleanup. I0105 11:07:01.779278 30321 lock.go:35] WriteFile acquiring /Users/rogermm/.kube/config: {Name:mk8a54556fcf5b2efd0025fe19350844cba96251 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0105 11:07:01.783237 30321 out.go:177] ๐Ÿ’€ Removed all traces of the "cluster-unfixed" cluster. I0105 11:07:01.787937 30321 start.go:282] selected driver: vmware I0105 11:07:01.787960 30321 start.go:808] validating driver "vmware" against I0105 11:07:01.787976 30321 start.go:819] status for vmware: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:} I0105 11:07:01.788124 30321 start_flags.go:303] no existing cluster config was found, will generate one from the flags I0105 11:07:01.788200 30321 start_flags.go:384] Using suggested 6000MB memory alloc based on sys=131072MB, container=0MB I0105 11:07:01.788422 30321 start_flags.go:883] Wait components to verify : map[apiserver:true system_pods:true] I0105 11:07:01.788437 30321 cni.go:95] Creating CNI manager for "" I0105 11:07:01.788444 30321 cni.go:169] CNI unnecessary in this configuration, recommending no CNI I0105 11:07:01.788453 30321 start_flags.go:317] config: {Name:cluster-unfixed KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:vmware HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:cluster-unfixed Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] 
DNSDomain:cluster.xpt ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} I0105 11:07:01.788586 30321 iso.go:124] acquiring lock: {Name:mkba4bd0feff696f37cbdd8fa2dfd9ff3b99b70d Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0105 11:07:01.793431 30321 out.go:177] ๐Ÿ‘ Starting control plane node cluster-unfixed in cluster cluster-unfixed I0105 11:07:01.801567 30321 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker I0105 11:07:01.801837 30321 cache.go:57] Caching tarball of preloaded images I0105 11:07:01.801985 30321 preload.go:174] Found /Volumes/data/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download I0105 11:07:01.801996 30321 cache.go:60] Finished verifying existence of preloaded tar for v1.25.3 on docker I0105 11:07:01.802076 30321 profile.go:148] Saving config to /Volumes/data/.minikube/profiles/cluster-unfixed/config.json ... 
I0105 11:07:01.802200 30321 lock.go:35] WriteFile acquiring /Volumes/data/.minikube/profiles/cluster-unfixed/config.json: {Name:mkb3d0dfcef549473310d347a20b61abd301758e Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0105 11:07:01.802502 30321 cache.go:208] Successfully downloaded all kic artifacts I0105 11:07:01.802516 30321 start.go:364] acquiring machines lock for cluster-unfixed: {Name:mkb90012c35e4ffa6233199eab6d9cfa46dc208a Clock:{} Delay:500ms Timeout:13m0s Cancel:} I0105 11:07:01.802751 30321 start.go:368] acquired machines lock for "cluster-unfixed" in 226.016ยตs I0105 11:07:01.802782 30321 start.go:93] Provisioning new machine with config: &{Name:cluster-unfixed KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.28.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:vmware HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:cluster-unfixed Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.xpt ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} I0105 11:07:01.802825 30321 start.go:125] createHost starting for "" (driver="vmware") I0105 11:07:01.805788 30321 out.go:204] ๐Ÿ”ฅ Creating vmware VM (CPUs=2, Memory=6000MB, Disk=20000MB) ... 
I0105 11:07:01.806054 30321 main.go:134] libmachine: Found binary path at /Applications/VMware Fusion.app/Contents/Public/docker-machine-driver-vmware I0105 11:07:01.806386 30321 main.go:134] libmachine: Launching plugin server for driver vmware I0105 11:07:01.899543 30321 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52880 I0105 11:07:01.903402 30321 main.go:134] libmachine: () Calling .GetVersion I0105 11:07:01.905614 30321 main.go:134] libmachine: Using API Version 1 I0105 11:07:01.905631 30321 main.go:134] libmachine: () Calling .SetConfigRaw I0105 11:07:01.907037 30321 main.go:134] libmachine: () Calling .GetMachineName I0105 11:07:01.907190 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetMachineName I0105 11:07:01.907332 30321 main.go:134] libmachine: (cluster-unfixed) Calling .DriverName I0105 11:07:01.907498 30321 start.go:159] libmachine.API.Create for "cluster-unfixed" (driver="vmware") I0105 11:07:01.907529 30321 client.go:168] LocalClient.Create starting I0105 11:07:01.907606 30321 main.go:134] libmachine: Creating CA: /Volumes/data/.minikube/certs/ca.pem I0105 11:07:02.012800 30321 main.go:134] libmachine: Creating client certificate: /Volumes/data/.minikube/certs/cert.pem I0105 11:07:02.412764 30321 main.go:134] libmachine: Running pre-create checks... I0105 11:07:02.412771 30321 main.go:134] libmachine: (cluster-unfixed) Calling .PreCreateCheck I0105 11:07:02.413263 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetConfigRaw I0105 11:07:02.414214 30321 main.go:134] libmachine: Creating machine... I0105 11:07:02.414221 30321 main.go:134] libmachine: (cluster-unfixed) Calling .Create I0105 11:07:02.415200 30321 main.go:134] libmachine: (cluster-unfixed) Downloading /Volumes/data/.minikube/cache/boot2docker.iso from file:///Volumes/data/.minikube/cache/iso/amd64/minikube-v1.28.0-amd64.iso... I0105 11:07:02.618486 30321 main.go:134] libmachine: (cluster-unfixed) Creating SSH key... I0105 11:07:02.694049 30321 main.go:134] libmachine: (cluster-unfixed) Creating VM... I0105 11:07:02.928252 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Creating disk '/Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmdk' I0105 11:07:02.928268 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Virtual disk creation successful. I0105 11:07:02.928919 30321 main.go:134] libmachine: (cluster-unfixed) Starting cluster-unfixed... I0105 11:07:02.928960 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun start /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx nogui I0105 11:07:13.013184 30321 main.go:134] libmachine: (cluster-unfixed) DBG | 2023-01-05T11:07:13.012| ServiceImpl_Opener: PID 30346 I0105 11:07:25.173891 30321 main.go:134] libmachine: (cluster-unfixed) Waiting for VM to come online... 
I0105 11:07:25.173964 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun list I0105 11:07:25.457699 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Total running VMs: 1 I0105 11:07:25.457730 30321 main.go:134] libmachine: (cluster-unfixed) DBG | /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx I0105 11:07:25.461145 30321 main.go:134] libmachine: (cluster-unfixed) DBG | MAC address in VMX: 00:0c:29:91:0a:45 I0105 11:07:25.461661 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet1/dhcpd.conf I0105 11:07:25.461846 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:01:192.168.20.1] I0105 11:07:25.461893 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf I0105 11:07:25.462005 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:08:192.168.252.1] I0105 11:07:25.462105 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet1.leases I0105 11:07:25.462246 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet8.leases I0105 11:07:25.462695 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Not there yet 1/60, error: IP not found for MAC 00:0c:29:91:0a:45 in DHCP leases I0105 11:07:27.467003 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun list I0105 11:07:27.755504 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Total running VMs: 1 I0105 11:07:27.755521 30321 main.go:134] libmachine: (cluster-unfixed) DBG | /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx I0105 11:07:27.758798 30321 main.go:134] libmachine: (cluster-unfixed) DBG | MAC address in VMX: 00:0c:29:91:0a:45 I0105 11:07:27.759096 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet1/dhcpd.conf I0105 11:07:27.759231 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:01:192.168.20.1] I0105 11:07:27.759246 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf I0105 11:07:27.759433 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:08:192.168.252.1] I0105 11:07:27.759535 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet1.leases I0105 11:07:27.759667 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet8.leases I0105 11:07:27.759781 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Not there yet 2/60, error: IP not found for MAC 00:0c:29:91:0a:45 in DHCP leases I0105 11:07:29.761652 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun list I0105 11:07:30.041706 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Total running VMs: 1 I0105 11:07:30.041720 30321 main.go:134] libmachine: (cluster-unfixed) DBG | 
/Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx I0105 11:07:30.045687 30321 main.go:134] libmachine: (cluster-unfixed) DBG | MAC address in VMX: 00:0c:29:91:0a:45 I0105 11:07:30.046018 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet1/dhcpd.conf I0105 11:07:30.046170 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:01:192.168.20.1] I0105 11:07:30.046180 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf I0105 11:07:30.046275 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:08:192.168.252.1] I0105 11:07:30.046363 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet1.leases I0105 11:07:30.046478 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet8.leases I0105 11:07:30.048811 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Not there yet 3/60, error: IP not found for MAC 00:0c:29:91:0a:45 in DHCP leases I0105 11:07:32.051600 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun list I0105 11:07:32.339115 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Total running VMs: 1 I0105 11:07:32.339148 30321 main.go:134] libmachine: (cluster-unfixed) DBG | /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx I0105 11:07:32.342593 30321 main.go:134] libmachine: (cluster-unfixed) DBG | MAC address in VMX: 00:0c:29:91:0a:45 I0105 11:07:32.342891 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet1/dhcpd.conf I0105 11:07:32.343027 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:01:192.168.20.1] I0105 11:07:32.343048 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf I0105 11:07:32.343114 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:08:192.168.252.1] I0105 11:07:32.343222 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet1.leases I0105 11:07:32.343328 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet8.leases I0105 11:07:32.343410 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Not there yet 4/60, error: IP not found for MAC 00:0c:29:91:0a:45 in DHCP leases I0105 11:07:34.343717 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun list I0105 11:07:34.636056 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Total running VMs: 1 I0105 11:07:34.636074 30321 main.go:134] libmachine: (cluster-unfixed) DBG | /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx I0105 11:07:34.639714 30321 main.go:134] libmachine: (cluster-unfixed) DBG | MAC address in VMX: 00:0c:29:91:0a:45 I0105 11:07:34.639963 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: 
/Library/Preferences/VMware Fusion/vmnet1/dhcpd.conf I0105 11:07:34.640108 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:01:192.168.20.1] I0105 11:07:34.640119 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf I0105 11:07:34.640196 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:08:192.168.252.1] I0105 11:07:34.640285 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet1.leases I0105 11:07:34.640388 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet8.leases I0105 11:07:34.640500 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Not there yet 5/60, error: IP not found for MAC 00:0c:29:91:0a:45 in DHCP leases I0105 11:07:36.640771 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun list I0105 11:07:36.944099 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Total running VMs: 1 I0105 11:07:36.944115 30321 main.go:134] libmachine: (cluster-unfixed) DBG | /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx I0105 11:07:36.947817 30321 main.go:134] libmachine: (cluster-unfixed) DBG | MAC address in VMX: 00:0c:29:91:0a:45 I0105 11:07:36.948063 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet1/dhcpd.conf I0105 11:07:36.948194 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:01:192.168.20.1] I0105 11:07:36.948206 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf I0105 11:07:36.948297 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:08:192.168.252.1] I0105 11:07:36.948379 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet1.leases I0105 11:07:36.948472 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet8.leases I0105 11:07:36.948598 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Not there yet 6/60, error: IP not found for MAC 00:0c:29:91:0a:45 in DHCP leases I0105 11:07:38.949015 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun list I0105 11:07:39.268842 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Total running VMs: 1 I0105 11:07:39.268876 30321 main.go:134] libmachine: (cluster-unfixed) DBG | /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx I0105 11:07:39.273571 30321 main.go:134] libmachine: (cluster-unfixed) DBG | MAC address in VMX: 00:0c:29:91:0a:45 I0105 11:07:39.273880 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet1/dhcpd.conf I0105 11:07:39.274003 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:01:192.168.20.1] I0105 11:07:39.274015 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: 
/Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf I0105 11:07:39.274141 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:08:192.168.252.1] I0105 11:07:39.274244 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet1.leases I0105 11:07:39.274364 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet8.leases I0105 11:07:39.274505 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Not there yet 7/60, error: IP not found for MAC 00:0c:29:91:0a:45 in DHCP leases I0105 11:07:41.276197 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun list I0105 11:07:41.579219 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Total running VMs: 1 I0105 11:07:41.579236 30321 main.go:134] libmachine: (cluster-unfixed) DBG | /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx I0105 11:07:41.583172 30321 main.go:134] libmachine: (cluster-unfixed) DBG | MAC address in VMX: 00:0c:29:91:0a:45 I0105 11:07:41.583448 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet1/dhcpd.conf I0105 11:07:41.583575 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:01:192.168.20.1] I0105 11:07:41.583585 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf I0105 11:07:41.583742 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:08:192.168.252.1] I0105 11:07:41.583842 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet1.leases I0105 11:07:41.583955 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet8.leases I0105 11:07:41.584075 30321 main.go:134] libmachine: (cluster-unfixed) DBG | IP found in DHCP lease table: 192.168.252.129 I0105 11:07:41.584085 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Got an ip: 192.168.252.129 I0105 11:07:41.584909 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Creating Tar key bundle... I0105 11:07:41.585508 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun -gu docker -gp tcuser directoryExistsInGuest /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx /var/lib/boot2docker I0105 11:07:41.941027 30321 main.go:134] libmachine: (cluster-unfixed) DBG | The directory exists. 
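The repeated "Not there yet N/60" blocks above show the vmware driver polling VMware Fusion's dhcpd.conf and DHCP lease files for the MAC address recorded in the .vmx until an address turns up in the vmnet8 leases (192.168.252.129 here). The same lookup can be reproduced by hand with the paths and MAC from this log; the exact grep flags are only an illustration:

    # Manual version of the driver's lease lookup (paths and MAC from this log);
    # -B shows the enclosing "lease <ip> {" line that carries the address
    grep -i -B 5 '00:0c:29:91:0a:45' /var/db/vmware/vmnet-dhcpd-vmnet8.leases
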
I0105 11:07:41.945154 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun -gu docker -gp tcuser CopyFileFromHostToGuest /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx /Volumes/data/.minikube/machines/cluster-unfixed/userdata.tar /home/docker/userdata.tar I0105 11:07:42.423970 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun -gu docker -gp tcuser runScriptInGuest /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx /bin/sh sudo sh -c "tar xvf /home/docker/userdata.tar -C /home/docker > /var/log/userdata.log 2>&1 && chown -R docker:staff /home/docker" I0105 11:07:43.823031 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun -gu docker -gp tcuser runScriptInGuest /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx /bin/sh sudo /bin/mv /home/docker/userdata.tar /var/lib/boot2docker/userdata.tar I0105 11:07:45.211640 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun -gu docker -gp tcuser enableSharedFolders /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx I0105 11:07:45.601177 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun -gu docker -gp tcuser addSharedFolder /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx Users /Users I0105 11:07:45.987149 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun -gu docker -gp tcuser runScriptInGuest /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx /bin/sh [ ! -d /hosthome ]&& sudo mkdir /hosthome; sudo mount --bind /mnt/hgfs//hosthome /hosthome || [ -f /usr/local/bin/vmhgfs-fuse ]&& sudo /usr/local/bin/vmhgfs-fuse -o allow_other .host:/Users /hosthome || sudo mount -t vmhgfs -o uid=$(id -u),gid=$(id -g) .host:/Users /hosthome I0105 11:07:47.374483 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Guest program exited with non-zero exit code: 1 I0105 11:07:47.377983 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetConfigRaw I0105 11:07:47.378868 30321 main.go:134] libmachine: (cluster-unfixed) Calling .DriverName I0105 11:07:47.379042 30321 main.go:134] libmachine: (cluster-unfixed) Calling .DriverName I0105 11:07:47.379192 30321 main.go:134] libmachine: Waiting for machine to be running, this may take a few minutes... I0105 11:07:47.379211 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetState I0105 11:07:47.379489 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun list I0105 11:07:47.677243 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Total running VMs: 1 I0105 11:07:47.677261 30321 main.go:134] libmachine: (cluster-unfixed) DBG | /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx I0105 11:07:47.680864 30321 main.go:134] libmachine: Detecting operating system of created instance... I0105 11:07:47.680879 30321 main.go:134] libmachine: Waiting for SSH to be available... I0105 11:07:47.680885 30321 main.go:134] libmachine: Getting to WaitForSSH function... 
I0105 11:07:47.680892 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHHostname I0105 11:07:47.681147 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun list I0105 11:07:47.991657 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Total running VMs: 1 I0105 11:07:47.991671 30321 main.go:134] libmachine: (cluster-unfixed) DBG | /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx I0105 11:07:47.995960 30321 main.go:134] libmachine: (cluster-unfixed) DBG | MAC address in VMX: 00:0c:29:91:0a:45 I0105 11:07:47.996579 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet1/dhcpd.conf I0105 11:07:47.996766 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:01:192.168.20.1] I0105 11:07:47.996779 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf I0105 11:07:47.996888 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:08:192.168.252.1] I0105 11:07:47.997017 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet1.leases I0105 11:07:47.997132 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet8.leases I0105 11:07:47.997247 30321 main.go:134] libmachine: (cluster-unfixed) DBG | IP found in DHCP lease table: 192.168.252.129 I0105 11:07:47.997323 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHPort I0105 11:07:47.997509 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHKeyPath I0105 11:07:47.997646 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHKeyPath I0105 11:07:47.997855 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHUsername I0105 11:07:47.998373 30321 main.go:134] libmachine: Using SSH client type: native I0105 11:07:47.998825 30321 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x1003edea0] 0x1003f1020 [] 0s} 192.168.252.129 22 } I0105 11:07:47.998833 30321 main.go:134] libmachine: About to run SSH command: exit 0 I0105 11:07:48.072208 30321 main.go:134] libmachine: SSH cmd err, output: : I0105 11:07:48.072220 30321 main.go:134] libmachine: Detecting the provisioner... 
I0105 11:07:48.072227 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHHostname I0105 11:07:48.072529 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun list I0105 11:07:48.394833 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Total running VMs: 1 I0105 11:07:48.394850 30321 main.go:134] libmachine: (cluster-unfixed) DBG | /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx I0105 11:07:48.398569 30321 main.go:134] libmachine: (cluster-unfixed) DBG | MAC address in VMX: 00:0c:29:91:0a:45 I0105 11:07:48.398845 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet1/dhcpd.conf I0105 11:07:48.398989 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:01:192.168.20.1] I0105 11:07:48.398999 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf I0105 11:07:48.399161 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:08:192.168.252.1] I0105 11:07:48.399253 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet1.leases I0105 11:07:48.399349 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet8.leases I0105 11:07:48.399496 30321 main.go:134] libmachine: (cluster-unfixed) DBG | IP found in DHCP lease table: 192.168.252.129 I0105 11:07:48.399599 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHPort I0105 11:07:48.399797 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHKeyPath I0105 11:07:48.399968 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHKeyPath I0105 11:07:48.400125 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHUsername I0105 11:07:48.400415 30321 main.go:134] libmachine: Using SSH client type: native I0105 11:07:48.400618 30321 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x1003edea0] 0x1003f1020 [] 0s} 192.168.252.129 22 } I0105 11:07:48.400640 30321 main.go:134] libmachine: About to run SSH command: cat /etc/os-release I0105 11:07:48.475275 30321 main.go:134] libmachine: SSH cmd err, output: : NAME=Buildroot VERSION=2021.02.12-1-gb347f1c-dirty ID=buildroot VERSION_ID=2021.02.12 PRETTY_NAME="Buildroot 2021.02.12" I0105 11:07:48.475362 30321 main.go:134] libmachine: found compatible host: buildroot I0105 11:07:48.475371 30321 main.go:134] libmachine: Provisioning with buildroot... 
I0105 11:07:48.475378 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetMachineName I0105 11:07:48.475626 30321 buildroot.go:166] provisioning hostname "cluster-unfixed" I0105 11:07:48.475639 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetMachineName I0105 11:07:48.475801 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHHostname I0105 11:07:48.476011 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun list I0105 11:07:48.759973 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Total running VMs: 1 I0105 11:07:48.759989 30321 main.go:134] libmachine: (cluster-unfixed) DBG | /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx I0105 11:07:48.763168 30321 main.go:134] libmachine: (cluster-unfixed) DBG | MAC address in VMX: 00:0c:29:91:0a:45 I0105 11:07:48.763554 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet1/dhcpd.conf I0105 11:07:48.763714 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:01:192.168.20.1] I0105 11:07:48.763740 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf I0105 11:07:48.763827 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:08:192.168.252.1] I0105 11:07:48.764013 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet1.leases I0105 11:07:48.764133 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet8.leases I0105 11:07:48.764253 30321 main.go:134] libmachine: (cluster-unfixed) DBG | IP found in DHCP lease table: 192.168.252.129 I0105 11:07:48.764358 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHPort I0105 11:07:48.764536 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHKeyPath I0105 11:07:48.764678 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHKeyPath I0105 11:07:48.764896 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHUsername I0105 11:07:48.765206 30321 main.go:134] libmachine: Using SSH client type: native I0105 11:07:48.765392 30321 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x1003edea0] 0x1003f1020 [] 0s} 192.168.252.129 22 } I0105 11:07:48.765400 30321 main.go:134] libmachine: About to run SSH command: sudo hostname cluster-unfixed && echo "cluster-unfixed" | sudo tee /etc/hostname I0105 11:07:48.851318 30321 main.go:134] libmachine: SSH cmd err, output: : cluster-unfixed I0105 11:07:48.851337 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHHostname I0105 11:07:48.851573 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun list I0105 11:07:49.135505 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Total running VMs: 1 I0105 11:07:49.135521 30321 main.go:134] libmachine: (cluster-unfixed) DBG | /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx I0105 11:07:49.139188 30321 main.go:134] libmachine: (cluster-unfixed) DBG | MAC address in VMX: 00:0c:29:91:0a:45 I0105 11:07:49.139426 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware 
Fusion/vmnet1/dhcpd.conf I0105 11:07:49.139571 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:01:192.168.20.1] I0105 11:07:49.139582 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf I0105 11:07:49.139699 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:08:192.168.252.1] I0105 11:07:49.139794 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet1.leases I0105 11:07:49.139880 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet8.leases I0105 11:07:49.139988 30321 main.go:134] libmachine: (cluster-unfixed) DBG | IP found in DHCP lease table: 192.168.252.129 I0105 11:07:49.140076 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHPort I0105 11:07:49.140267 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHKeyPath I0105 11:07:49.140415 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHKeyPath I0105 11:07:49.140580 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHUsername I0105 11:07:49.140844 30321 main.go:134] libmachine: Using SSH client type: native I0105 11:07:49.141043 30321 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x1003edea0] 0x1003f1020 [] 0s} 192.168.252.129 22 } I0105 11:07:49.141072 30321 main.go:134] libmachine: About to run SSH command: if ! grep -xq '.*\scluster-unfixed' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cluster-unfixed/g' /etc/hosts; else echo '127.0.1.1 cluster-unfixed' | sudo tee -a /etc/hosts; fi fi I0105 11:07:49.221889 30321 main.go:134] libmachine: SSH cmd err, output: : I0105 11:07:49.221906 30321 buildroot.go:172] set auth options {CertDir:/Volumes/data/.minikube CaCertPath:/Volumes/data/.minikube/certs/ca.pem CaPrivateKeyPath:/Volumes/data/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Volumes/data/.minikube/machines/server.pem ServerKeyPath:/Volumes/data/.minikube/machines/server-key.pem ClientKeyPath:/Volumes/data/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Volumes/data/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Volumes/data/.minikube} I0105 11:07:49.221933 30321 buildroot.go:174] setting up certificates I0105 11:07:49.221947 30321 provision.go:83] configureAuth start I0105 11:07:49.221954 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetMachineName I0105 11:07:49.222145 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetIP I0105 11:07:49.222327 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun list I0105 11:07:49.501153 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Total running VMs: 1 I0105 11:07:49.501169 30321 main.go:134] libmachine: (cluster-unfixed) DBG | /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx I0105 11:07:49.504593 30321 main.go:134] libmachine: (cluster-unfixed) DBG | MAC address in VMX: 00:0c:29:91:0a:45 I0105 11:07:49.504932 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet1/dhcpd.conf I0105 11:07:49.505061 30321 main.go:134] 
libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:01:192.168.20.1] I0105 11:07:49.505072 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf I0105 11:07:49.505214 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:08:192.168.252.1] I0105 11:07:49.505299 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet1.leases I0105 11:07:49.505428 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet8.leases I0105 11:07:49.505557 30321 main.go:134] libmachine: (cluster-unfixed) DBG | IP found in DHCP lease table: 192.168.252.129 I0105 11:07:49.505642 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHHostname I0105 11:07:49.505880 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun list I0105 11:07:49.794744 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Total running VMs: 1 I0105 11:07:49.794761 30321 main.go:134] libmachine: (cluster-unfixed) DBG | /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx I0105 11:07:49.798577 30321 main.go:134] libmachine: (cluster-unfixed) DBG | MAC address in VMX: 00:0c:29:91:0a:45 I0105 11:07:49.798823 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet1/dhcpd.conf I0105 11:07:49.798948 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:01:192.168.20.1] I0105 11:07:49.798957 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf I0105 11:07:49.799055 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:08:192.168.252.1] I0105 11:07:49.799139 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet1.leases I0105 11:07:49.799236 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet8.leases I0105 11:07:49.799366 30321 main.go:134] libmachine: (cluster-unfixed) DBG | IP found in DHCP lease table: 192.168.252.129 I0105 11:07:49.799459 30321 provision.go:138] copyHostCerts I0105 11:07:49.799582 30321 exec_runner.go:151] cp: /Volumes/data/.minikube/certs/key.pem --> /Volumes/data/.minikube/key.pem (1675 bytes) I0105 11:07:49.799931 30321 exec_runner.go:151] cp: /Volumes/data/.minikube/certs/ca.pem --> /Volumes/data/.minikube/ca.pem (1078 bytes) I0105 11:07:49.800222 30321 exec_runner.go:151] cp: /Volumes/data/.minikube/certs/cert.pem --> /Volumes/data/.minikube/cert.pem (1123 bytes) I0105 11:07:49.800443 30321 provision.go:112] generating server cert: /Volumes/data/.minikube/machines/server.pem ca-key=/Volumes/data/.minikube/certs/ca.pem private-key=/Volumes/data/.minikube/certs/ca-key.pem org=rogermm.cluster-unfixed san=[192.168.252.129 192.168.252.129 localhost 127.0.0.1 minikube cluster-unfixed] I0105 11:07:50.364731 30321 provision.go:172] copyRemoteCerts I0105 11:07:50.365377 30321 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0105 11:07:50.365397 30321 main.go:134] libmachine: (cluster-unfixed) 
Calling .GetSSHHostname I0105 11:07:50.365691 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun list I0105 11:07:50.638063 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Total running VMs: 1 I0105 11:07:50.638078 30321 main.go:134] libmachine: (cluster-unfixed) DBG | /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx I0105 11:07:50.641457 30321 main.go:134] libmachine: (cluster-unfixed) DBG | MAC address in VMX: 00:0c:29:91:0a:45 I0105 11:07:50.641764 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet1/dhcpd.conf I0105 11:07:50.641900 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:01:192.168.20.1] I0105 11:07:50.641912 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf I0105 11:07:50.642053 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:08:192.168.252.1] I0105 11:07:50.642170 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet1.leases I0105 11:07:50.642269 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet8.leases I0105 11:07:50.642376 30321 main.go:134] libmachine: (cluster-unfixed) DBG | IP found in DHCP lease table: 192.168.252.129 I0105 11:07:50.642481 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHPort I0105 11:07:50.642649 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHKeyPath I0105 11:07:50.642823 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHUsername I0105 11:07:50.643094 30321 sshutil.go:53] new ssh client: &{IP:192.168.252.129 Port:22 SSHKeyPath:/Volumes/data/.minikube/machines/cluster-unfixed/id_rsa Username:docker} I0105 11:07:50.690430 30321 ssh_runner.go:362] scp /Volumes/data/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes) I0105 11:07:50.719847 30321 ssh_runner.go:362] scp /Volumes/data/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes) I0105 11:07:50.748476 30321 ssh_runner.go:362] scp /Volumes/data/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes) I0105 11:07:50.777275 30321 provision.go:86] duration metric: configureAuth took 1.555405265s I0105 11:07:50.777286 30321 buildroot.go:189] setting minikube options for container-runtime I0105 11:07:50.777511 30321 config.go:180] Loaded profile config "cluster-unfixed": Driver=vmware, ContainerRuntime=docker, KubernetesVersion=v1.25.3 I0105 11:07:50.777525 30321 main.go:134] libmachine: (cluster-unfixed) Calling .DriverName I0105 11:07:50.777728 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHHostname I0105 11:07:50.778022 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun list I0105 11:07:51.078573 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Total running VMs: 1 I0105 11:07:51.078588 30321 main.go:134] libmachine: (cluster-unfixed) DBG | /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx I0105 11:07:51.082190 30321 main.go:134] libmachine: (cluster-unfixed) DBG | MAC address in VMX: 00:0c:29:91:0a:45 I0105 11:07:51.082465 30321 main.go:134] libmachine: (cluster-unfixed) 
DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet1/dhcpd.conf I0105 11:07:51.082596 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:01:192.168.20.1] I0105 11:07:51.082616 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf I0105 11:07:51.082734 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:08:192.168.252.1] I0105 11:07:51.082818 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet1.leases I0105 11:07:51.082923 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet8.leases I0105 11:07:51.083010 30321 main.go:134] libmachine: (cluster-unfixed) DBG | IP found in DHCP lease table: 192.168.252.129 I0105 11:07:51.083169 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHPort I0105 11:07:51.083347 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHKeyPath I0105 11:07:51.083508 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHKeyPath I0105 11:07:51.083705 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHUsername I0105 11:07:51.083996 30321 main.go:134] libmachine: Using SSH client type: native I0105 11:07:51.084177 30321 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x1003edea0] 0x1003f1020 [] 0s} 192.168.252.129 22 } I0105 11:07:51.084201 30321 main.go:134] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I0105 11:07:51.158118 30321 main.go:134] libmachine: SSH cmd err, output: : tmpfs I0105 11:07:51.158127 30321 buildroot.go:70] root file system type: tmpfs I0105 11:07:51.158349 30321 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ... 
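Because the guest's root filesystem reports tmpfs (the Buildroot ISO runs from memory), the docker.service unit written here does not survive a reboot, so the provisioner regenerates it on every start. The filesystem type and the unit that was actually installed can be re-checked later with the same commands this log runs over SSH; a sketch using the profile name from this log:

    $ minikube -p cluster-unfixed ssh "df --output=fstype /"
    $ minikube -p cluster-unfixed ssh "sudo systemctl cat docker.service"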
I0105 11:07:51.158370 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHHostname I0105 11:07:51.158637 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun list I0105 11:07:51.451421 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Total running VMs: 1 I0105 11:07:51.451438 30321 main.go:134] libmachine: (cluster-unfixed) DBG | /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx I0105 11:07:51.455386 30321 main.go:134] libmachine: (cluster-unfixed) DBG | MAC address in VMX: 00:0c:29:91:0a:45 I0105 11:07:51.455653 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet1/dhcpd.conf I0105 11:07:51.455794 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:01:192.168.20.1] I0105 11:07:51.455820 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf I0105 11:07:51.455882 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:08:192.168.252.1] I0105 11:07:51.455975 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet1.leases I0105 11:07:51.456087 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet8.leases I0105 11:07:51.456225 30321 main.go:134] libmachine: (cluster-unfixed) DBG | IP found in DHCP lease table: 192.168.252.129 I0105 11:07:51.456320 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHPort I0105 11:07:51.456507 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHKeyPath I0105 11:07:51.456699 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHKeyPath I0105 11:07:51.456854 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHUsername I0105 11:07:51.457170 30321 main.go:134] libmachine: Using SSH client type: native I0105 11:07:51.457333 30321 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x1003edea0] 0x1003f1020 [] 0s} 192.168.252.129 22 } I0105 11:07:51.457418 30321 main.go:134] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com After=network.target minikube-automount.service docker.socket Requires= minikube-automount.service docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. 
ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=vmware --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP \$MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new I0105 11:07:51.545100 30321 main.go:134] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com After=network.target minikube-automount.service docker.socket Requires= minikube-automount.service docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=vmware --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. 
TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target I0105 11:07:51.545132 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHHostname I0105 11:07:51.545445 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun list I0105 11:07:51.852755 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Total running VMs: 1 I0105 11:07:51.852779 30321 main.go:134] libmachine: (cluster-unfixed) DBG | /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx I0105 11:07:51.856851 30321 main.go:134] libmachine: (cluster-unfixed) DBG | MAC address in VMX: 00:0c:29:91:0a:45 I0105 11:07:51.857188 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet1/dhcpd.conf I0105 11:07:51.858162 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:01:192.168.20.1] I0105 11:07:51.858179 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf I0105 11:07:51.858306 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:08:192.168.252.1] I0105 11:07:51.858436 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet1.leases I0105 11:07:51.858561 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet8.leases I0105 11:07:51.858673 30321 main.go:134] libmachine: (cluster-unfixed) DBG | IP found in DHCP lease table: 192.168.252.129 I0105 11:07:51.858758 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHPort I0105 11:07:51.858946 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHKeyPath I0105 11:07:51.859109 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHKeyPath I0105 11:07:51.859271 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHUsername I0105 11:07:51.859551 30321 main.go:134] libmachine: Using SSH client type: native I0105 11:07:51.859751 30321 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x1003edea0] 0x1003f1020 [] 0s} 192.168.252.129 22 } I0105 11:07:51.859767 30321 main.go:134] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } I0105 11:07:52.613211 30321 main.go:134] libmachine: SSH cmd err, output: : diff: can't stat '/lib/systemd/system/docker.service': No such file or directory Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service. I0105 11:07:52.613225 30321 main.go:134] libmachine: Checking connection to Docker...
I0105 11:07:52.613232 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetURL I0105 11:07:52.613508 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun list I0105 11:07:52.907743 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Total running VMs: 1 I0105 11:07:52.907757 30321 main.go:134] libmachine: (cluster-unfixed) DBG | /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx I0105 11:07:52.911214 30321 main.go:134] libmachine: (cluster-unfixed) DBG | MAC address in VMX: 00:0c:29:91:0a:45 I0105 11:07:52.911506 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet1/dhcpd.conf I0105 11:07:52.911628 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:01:192.168.20.1] I0105 11:07:52.911639 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf I0105 11:07:52.911731 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:08:192.168.252.1] I0105 11:07:52.911830 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet1.leases I0105 11:07:52.911931 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet8.leases I0105 11:07:52.912053 30321 main.go:134] libmachine: (cluster-unfixed) DBG | IP found in DHCP lease table: 192.168.252.129 I0105 11:07:52.912128 30321 main.go:134] libmachine: Docker is up and running! I0105 11:07:52.912135 30321 main.go:134] libmachine: Reticulating splines... 
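The "Checking connection to Docker" / "Docker is up and running!" exchange above confirms that the daemon just configured (-H tcp://0.0.0.0:2376 with TLS, per the unit written earlier) is answering. The same daemon can be reached from the host shell; a minimal sketch, assuming a local docker CLI and the profile name from this log:

    $ eval "$(minikube -p cluster-unfixed docker-env)"        # exports DOCKER_HOST, DOCKER_TLS_VERIFY, DOCKER_CERT_PATH
    $ docker version --format '{{.Server.Version}}'
    $ eval "$(minikube -p cluster-unfixed docker-env --unset)" # restore the local environment afterwards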
I0105 11:07:52.912139 30321 client.go:171] LocalClient.Create took 51.007666342s I0105 11:07:52.912153 30321 start.go:167] duration metric: libmachine.API.Create for "cluster-unfixed" took 51.007716479s I0105 11:07:52.912159 30321 start.go:300] post-start starting for "cluster-unfixed" (driver="vmware") I0105 11:07:52.912172 30321 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0105 11:07:52.912185 30321 main.go:134] libmachine: (cluster-unfixed) Calling .DriverName I0105 11:07:52.912571 30321 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0105 11:07:52.912592 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHHostname I0105 11:07:52.912807 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun list I0105 11:07:53.231627 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Total running VMs: 1 I0105 11:07:53.231646 30321 main.go:134] libmachine: (cluster-unfixed) DBG | /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx I0105 11:07:53.235285 30321 main.go:134] libmachine: (cluster-unfixed) DBG | MAC address in VMX: 00:0c:29:91:0a:45 I0105 11:07:53.235555 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet1/dhcpd.conf I0105 11:07:53.235688 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:01:192.168.20.1] I0105 11:07:53.235699 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf I0105 11:07:53.235778 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:08:192.168.252.1] I0105 11:07:53.235881 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet1.leases I0105 11:07:53.235976 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet8.leases I0105 11:07:53.236085 30321 main.go:134] libmachine: (cluster-unfixed) DBG | IP found in DHCP lease table: 192.168.252.129 I0105 11:07:53.236161 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHPort I0105 11:07:53.236361 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHKeyPath I0105 11:07:53.236505 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHUsername I0105 11:07:53.236689 30321 sshutil.go:53] new ssh client: &{IP:192.168.252.129 Port:22 SSHKeyPath:/Volumes/data/.minikube/machines/cluster-unfixed/id_rsa Username:docker} I0105 11:07:53.288356 30321 ssh_runner.go:195] Run: cat /etc/os-release I0105 11:07:53.293485 30321 info.go:137] Remote host: Buildroot 2021.02.12 I0105 11:07:53.293519 30321 filesync.go:126] Scanning /Volumes/data/.minikube/addons for local assets ... I0105 11:07:53.293694 30321 filesync.go:126] Scanning /Volumes/data/.minikube/files for local assets ... 
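The two "Scanning ... for local assets" entries are minikube's file-sync step: anything placed under /Volumes/data/.minikube/files is copied into the guest at the same relative path during this post-start phase. A sketch of how that can be used; the certificate file name here is only a hypothetical example:

    $ mkdir -p /Volumes/data/.minikube/files/etc/ssl/certs
    $ cp extra-ca.pem /Volumes/data/.minikube/files/etc/ssl/certs/   # hypothetical file; lands in /etc/ssl/certs/ inside the guest
    $ minikube -p cluster-unfixed stop && minikube -p cluster-unfixed start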
I0105 11:07:53.293769 30321 start.go:303] post-start completed in 381.628217ms I0105 11:07:53.293794 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetConfigRaw I0105 11:07:53.294583 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetIP I0105 11:07:53.294793 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun list I0105 11:07:53.604485 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Total running VMs: 1 I0105 11:07:53.604505 30321 main.go:134] libmachine: (cluster-unfixed) DBG | /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx I0105 11:07:53.607923 30321 main.go:134] libmachine: (cluster-unfixed) DBG | MAC address in VMX: 00:0c:29:91:0a:45 I0105 11:07:53.608236 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet1/dhcpd.conf I0105 11:07:53.608386 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:01:192.168.20.1] I0105 11:07:53.608397 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf I0105 11:07:53.608523 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:08:192.168.252.1] I0105 11:07:53.608628 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet1.leases I0105 11:07:53.608764 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet8.leases I0105 11:07:53.608876 30321 main.go:134] libmachine: (cluster-unfixed) DBG | IP found in DHCP lease table: 192.168.252.129 I0105 11:07:53.609093 30321 profile.go:148] Saving config to /Volumes/data/.minikube/profiles/cluster-unfixed/config.json ... 
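The profile's settings are persisted to config.json at this point. When debugging a start it is worth confirming that the non-default options seen in this log (the vmware driver, Kubernetes v1.25.3, the custom cluster.xpt DNS domain) actually landed in the saved profile; a sketch, assuming python3 is available on the host:

    $ minikube profile list
    $ python3 -m json.tool /Volumes/data/.minikube/profiles/cluster-unfixed/config.json | grep -iE 'driver|dnsdomain|kubernetesversion'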
I0105 11:07:53.609591 30321 start.go:128] duration metric: createHost completed in 51.809866733s I0105 11:07:53.609619 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHHostname I0105 11:07:53.609861 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun list I0105 11:07:53.920798 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Total running VMs: 1 I0105 11:07:53.920811 30321 main.go:134] libmachine: (cluster-unfixed) DBG | /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx I0105 11:07:53.924347 30321 main.go:134] libmachine: (cluster-unfixed) DBG | MAC address in VMX: 00:0c:29:91:0a:45 I0105 11:07:53.924605 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet1/dhcpd.conf I0105 11:07:53.924760 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:01:192.168.20.1] I0105 11:07:53.924789 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf I0105 11:07:53.924911 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:08:192.168.252.1] I0105 11:07:53.925009 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet1.leases I0105 11:07:53.925129 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet8.leases I0105 11:07:53.925230 30321 main.go:134] libmachine: (cluster-unfixed) DBG | IP found in DHCP lease table: 192.168.252.129 I0105 11:07:53.925316 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHPort I0105 11:07:53.925544 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHKeyPath I0105 11:07:53.925725 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHKeyPath I0105 11:07:53.925898 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHUsername I0105 11:07:53.926194 30321 main.go:134] libmachine: Using SSH client type: native I0105 11:07:53.926395 30321 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x1003edea0] 0x1003f1020 [] 0s} 192.168.252.129 22 } I0105 11:07:53.926407 30321 main.go:134] libmachine: About to run SSH command: date +%!s(MISSING).%!N(MISSING) I0105 11:07:54.003726 30321 main.go:134] libmachine: SSH cmd err, output: : 1672945674.489901240 I0105 11:07:54.003735 30321 fix.go:207] guest clock: 1672945674.489901240 I0105 11:07:54.003742 30321 fix.go:220] Guest: 2023-01-05 11:07:54.48990124 -0800 PST Remote: 2023-01-05 11:07:53.609607 -0800 PST m=+86.944883269 (delta=880.29424ms) I0105 11:07:54.003766 30321 fix.go:191] guest clock delta is within tolerance: 880.29424ms I0105 11:07:54.003771 30321 start.go:83] releasing machines lock for "cluster-unfixed", held for 52.204145111s I0105 11:07:54.003795 30321 main.go:134] libmachine: (cluster-unfixed) Calling .DriverName I0105 11:07:54.004001 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetIP I0105 11:07:54.004222 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun list I0105 11:07:54.315861 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Total running VMs: 1 I0105 11:07:54.315878 30321 main.go:134] libmachine: (cluster-unfixed) DBG | 
/Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx I0105 11:07:54.319463 30321 main.go:134] libmachine: (cluster-unfixed) DBG | MAC address in VMX: 00:0c:29:91:0a:45 I0105 11:07:54.319697 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet1/dhcpd.conf I0105 11:07:54.319827 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:01:192.168.20.1] I0105 11:07:54.319838 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf I0105 11:07:54.319948 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:08:192.168.252.1] I0105 11:07:54.320025 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet1.leases I0105 11:07:54.320130 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet8.leases I0105 11:07:54.320249 30321 main.go:134] libmachine: (cluster-unfixed) DBG | IP found in DHCP lease table: 192.168.252.129 I0105 11:07:54.320365 30321 main.go:134] libmachine: (cluster-unfixed) Calling .DriverName I0105 11:07:54.321022 30321 main.go:134] libmachine: (cluster-unfixed) Calling .DriverName I0105 11:07:54.321197 30321 main.go:134] libmachine: (cluster-unfixed) Calling .DriverName I0105 11:07:54.321592 30321 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/ I0105 11:07:54.321691 30321 ssh_runner.go:195] Run: systemctl --version I0105 11:07:54.321704 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHHostname I0105 11:07:54.321833 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHHostname I0105 11:07:54.321914 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun list I0105 11:07:54.322042 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun list I0105 11:07:54.684206 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Total running VMs: 1 I0105 11:07:54.684224 30321 main.go:134] libmachine: (cluster-unfixed) DBG | /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx I0105 11:07:54.684287 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Total running VMs: 1 I0105 11:07:54.684298 30321 main.go:134] libmachine: (cluster-unfixed) DBG | /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx I0105 11:07:54.688674 30321 main.go:134] libmachine: (cluster-unfixed) DBG | MAC address in VMX: 00:0c:29:91:0a:45 I0105 11:07:54.688758 30321 main.go:134] libmachine: (cluster-unfixed) DBG | MAC address in VMX: 00:0c:29:91:0a:45 I0105 11:07:54.688902 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet1/dhcpd.conf I0105 11:07:54.688981 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet1/dhcpd.conf I0105 11:07:54.689024 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:01:192.168.20.1] I0105 11:07:54.689040 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware 
Fusion/vmnet8/dhcpd.conf I0105 11:07:54.689124 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:01:192.168.20.1] I0105 11:07:54.689141 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf I0105 11:07:54.689159 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:08:192.168.252.1] I0105 11:07:54.689232 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:08:192.168.252.1] I0105 11:07:54.689243 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet1.leases I0105 11:07:54.689313 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet1.leases I0105 11:07:54.689327 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet8.leases I0105 11:07:54.689407 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet8.leases I0105 11:07:54.689457 30321 main.go:134] libmachine: (cluster-unfixed) DBG | IP found in DHCP lease table: 192.168.252.129 I0105 11:07:54.689514 30321 main.go:134] libmachine: (cluster-unfixed) DBG | IP found in DHCP lease table: 192.168.252.129 I0105 11:07:54.689555 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHPort I0105 11:07:54.689560 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHPort I0105 11:07:54.689764 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHKeyPath I0105 11:07:54.689791 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHKeyPath I0105 11:07:54.690198 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHUsername I0105 11:07:54.690229 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHUsername I0105 11:07:54.690381 30321 sshutil.go:53] new ssh client: &{IP:192.168.252.129 Port:22 SSHKeyPath:/Volumes/data/.minikube/machines/cluster-unfixed/id_rsa Username:docker} I0105 11:07:54.690401 30321 sshutil.go:53] new ssh client: &{IP:192.168.252.129 Port:22 SSHKeyPath:/Volumes/data/.minikube/machines/cluster-unfixed/id_rsa Username:docker} I0105 11:07:54.895730 30321 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker I0105 11:07:54.895928 30321 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0105 11:07:54.938015 30321 docker.go:613] Got preloaded images: I0105 11:07:54.938022 30321 docker.go:619] registry.k8s.io/kube-apiserver:v1.25.3 wasn't preloaded I0105 11:07:54.938169 30321 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json I0105 11:07:54.950423 30321 ssh_runner.go:195] Run: which lz4 I0105 11:07:54.955515 30321 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4 I0105 11:07:54.960504 30321 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1 stdout: stderr: stat: cannot statx '/preloaded.tar.lz4': No such file or directory I0105 11:07:54.960533 30321 ssh_runner.go:362] scp /Volumes/data/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (404166592 bytes) I0105 11:08:06.944569 30321 docker.go:577] 
Took 11.989932 seconds to copy over tarball I0105 11:08:06.944740 30321 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4 I0105 11:08:14.268074 30321 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (7.32374172s) I0105 11:08:14.268087 30321 ssh_runner.go:146] rm: /preloaded.tar.lz4 I0105 11:08:14.313404 30321 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json I0105 11:08:14.327760 30321 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2628 bytes) I0105 11:08:14.350262 30321 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0105 11:08:14.492484 30321 ssh_runner.go:195] Run: sudo systemctl restart docker I0105 11:08:16.240361 30321 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.747959669s) I0105 11:08:16.240582 30321 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0105 11:08:16.256961 30321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd I0105 11:08:16.276339 30321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio I0105 11:08:16.291933 30321 ssh_runner.go:195] Run: sudo systemctl stop -f crio I0105 11:08:16.326973 30321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio I0105 11:08:16.344814 30321 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock image-endpoint: unix:///var/run/cri-dockerd.sock " | sudo tee /etc/crictl.yaml" I0105 11:08:16.369002 30321 ssh_runner.go:195] Run: sudo systemctl unmask docker.service I0105 11:08:16.513934 30321 ssh_runner.go:195] Run: sudo systemctl enable docker.socket I0105 11:08:16.660825 30321 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0105 11:08:16.791383 30321 ssh_runner.go:195] Run: sudo systemctl restart docker I0105 11:08:18.203017 30321 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.411689192s) I0105 11:08:18.203195 30321 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket I0105 11:08:18.338706 30321 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0105 11:08:18.475123 30321 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket I0105 11:08:18.497242 30321 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock I0105 11:08:18.497426 30321 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock I0105 11:08:18.504840 30321 start.go:472] Will wait 60s for crictl version I0105 11:08:18.504956 30321 ssh_runner.go:195] Run: sudo crictl version I0105 11:08:18.674719 30321 start.go:481] Version: 0.1.0 RuntimeName: docker RuntimeVersion: 20.10.20 RuntimeApiVersion: 1.41.0 I0105 11:08:18.674874 30321 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0105 11:08:18.712576 30321 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0105 11:08:18.753507 30321 out.go:204] 🐳 Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
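At this point the preloaded image tarball has been unpacked, dockerd has been restarted against the regenerated unit, and cri-dockerd's socket is up; the crictl and docker probes above show Docker 20.10.20 serving the CRI socket. The same checks can be repeated by hand at any time, using the commands already recorded in this log:

    $ minikube -p cluster-unfixed ssh "sudo crictl version"
    $ minikube -p cluster-unfixed ssh "docker version --format '{{.Server.Version}}'"
    $ minikube -p cluster-unfixed ssh "docker images --format '{{.Repository}}:{{.Tag}}'"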
I0105 11:08:18.753559 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetIP I0105 11:08:18.753875 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun list I0105 11:08:19.057050 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Total running VMs: 1 I0105 11:08:19.057068 30321 main.go:134] libmachine: (cluster-unfixed) DBG | /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx I0105 11:08:19.060790 30321 main.go:134] libmachine: (cluster-unfixed) DBG | MAC address in VMX: 00:0c:29:91:0a:45 I0105 11:08:19.061115 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet1/dhcpd.conf I0105 11:08:19.061244 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:01:192.168.20.1] I0105 11:08:19.061255 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf I0105 11:08:19.061369 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:08:192.168.252.1] I0105 11:08:19.061489 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet1.leases I0105 11:08:19.061610 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet8.leases I0105 11:08:19.061775 30321 main.go:134] libmachine: (cluster-unfixed) DBG | IP found in DHCP lease table: 192.168.252.129 I0105 11:08:19.062158 30321 ssh_runner.go:195] Run: grep 192.168.252.1 host.minikube.internal$ /etc/hosts I0105 11:08:19.068020 30321 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.252.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0105 11:08:19.086092 30321 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker I0105 11:08:19.086209 30321 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0105 11:08:19.121500 30321 docker.go:613] Got preloaded images: -- stdout -- registry.k8s.io/kube-apiserver:v1.25.3 registry.k8s.io/kube-scheduler:v1.25.3 registry.k8s.io/kube-controller-manager:v1.25.3 registry.k8s.io/kube-proxy:v1.25.3 registry.k8s.io/pause:3.8 registry.k8s.io/etcd:3.5.4-0 registry.k8s.io/coredns/coredns:v1.9.3 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0105 11:08:19.121512 30321 docker.go:543] Images already preloaded, skipping extraction I0105 11:08:19.121673 30321 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0105 11:08:19.153583 30321 docker.go:613] Got preloaded images: -- stdout -- registry.k8s.io/kube-apiserver:v1.25.3 registry.k8s.io/kube-controller-manager:v1.25.3 registry.k8s.io/kube-scheduler:v1.25.3 registry.k8s.io/kube-proxy:v1.25.3 registry.k8s.io/pause:3.8 registry.k8s.io/etcd:3.5.4-0 registry.k8s.io/coredns/coredns:v1.9.3 gcr.io/k8s-minikube/storage-provisioner:v5 -- /stdout -- I0105 11:08:19.153607 30321 cache_images.go:84] Images are preloaded, skipping loading I0105 11:08:19.153737 30321 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}} I0105 11:08:19.201447 30321 cni.go:95] Creating CNI manager for "" I0105 11:08:19.201461 30321 cni.go:169] CNI unnecessary in this configuration, recommending no CNI I0105 11:08:19.201482 30321 kubeadm.go:87] Using 
pod CIDR: 10.244.0.0/16 I0105 11:08:19.201499 30321 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.252.129 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cluster-unfixed NodeName:cluster-unfixed DNSDomain:cluster.xpt CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.252.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.252.129 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false} I0105 11:08:19.201639 30321 kubeadm.go:161] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta3 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.168.252.129 bindPort: 8443 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token ttl: 24h0m0s usages: - signing - authentication nodeRegistration: criSocket: /var/run/cri-dockerd.sock name: "cluster-unfixed" kubeletExtraArgs: node-ip: 192.168.252.129 taints: [] --- apiVersion: kubeadm.k8s.io/v1beta3 kind: ClusterConfiguration apiServer: certSANs: ["127.0.0.1", "localhost", "192.168.252.129"] extraArgs: enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota" controllerManager: extraArgs: allocate-node-cidrs: "true" leader-elect: "false" scheduler: extraArgs: leader-elect: "false" certificatesDir: /var/lib/minikube/certs clusterName: mk controlPlaneEndpoint: control-plane.minikube.internal:8443 etcd: local: dataDir: /var/lib/minikube/etcd extraArgs: proxy-refresh-interval: "70000" kubernetesVersion: v1.25.3 networking: dnsDomain: cluster.xpt podSubnet: "10.244.0.0/16" serviceSubnet: 10.96.0.0/12 --- apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration authentication: x509: clientCAFile: /var/lib/minikube/certs/ca.crt cgroupDriver: systemd clusterDomain: "cluster.xpt" # disable disk resource management by default imageGCHighThresholdPercent: 100 evictionHard: nodefs.available: "0%!"(MISSING) nodefs.inodesFree: "0%!"(MISSING) imagefs.available: "0%!"(MISSING) failSwapOn: false staticPodPath: /etc/kubernetes/manifests --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration clusterCIDR: "10.244.0.0/16" metricsBindAddress: 0.0.0.0:10249 conntrack: maxPerCore: 0 # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established" tcpEstablishedTimeout: 0s # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close" tcpCloseWaitTimeout: 0s I0105 11:08:19.202125 30321 kubeadm.go:962] kubelet [Unit] Wants=docker.socket [Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock 
--hostname-override=cluster-unfixed --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.252.129 --runtime-request-timeout=15m [Install] config: {KubernetesVersion:v1.25.3 ClusterName:cluster-unfixed Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.xpt ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} I0105 11:08:19.202314 30321 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3 I0105 11:08:19.215835 30321 binaries.go:44] Found k8s binaries, skipping transfer I0105 11:08:19.215989 30321 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I0105 11:08:19.226379 30321 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (480 bytes) I0105 11:08:19.249858 30321 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes) I0105 11:08:19.274250 30321 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2042 bytes) I0105 11:08:19.299405 30321 ssh_runner.go:195] Run: grep 192.168.252.129 control-plane.minikube.internal$ /etc/hosts I0105 11:08:19.304988 30321 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.252.129 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0105 11:08:19.322399 30321 certs.go:54] Setting up /Volumes/data/.minikube/profiles/cluster-unfixed for IP: 192.168.252.129 I0105 11:08:19.322442 30321 certs.go:187] generating minikubeCA CA: /Volumes/data/.minikube/ca.key I0105 11:08:19.455026 30321 crypto.go:156] Writing cert to /Volumes/data/.minikube/ca.crt ... I0105 11:08:19.455040 30321 lock.go:35] WriteFile acquiring /Volumes/data/.minikube/ca.crt: {Name:mk4401ba39affbc4c7e58c0f611cf5ebf8383adc Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0105 11:08:19.455379 30321 crypto.go:164] Writing key to /Volumes/data/.minikube/ca.key ... I0105 11:08:19.455386 30321 lock.go:35] WriteFile acquiring /Volumes/data/.minikube/ca.key: {Name:mk1afc7bb292cd1963c39e27f56424f43b8d6cd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0105 11:08:19.455654 30321 certs.go:187] generating proxyClientCA CA: /Volumes/data/.minikube/proxy-client-ca.key I0105 11:08:19.750452 30321 crypto.go:156] Writing cert to /Volumes/data/.minikube/proxy-client-ca.crt ... I0105 11:08:19.750464 30321 lock.go:35] WriteFile acquiring /Volumes/data/.minikube/proxy-client-ca.crt: {Name:mk27c063f5e11e936e52e8bac4aa750d8b736f11 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0105 11:08:19.750794 30321 crypto.go:164] Writing key to /Volumes/data/.minikube/proxy-client-ca.key ... I0105 11:08:19.750801 30321 lock.go:35] WriteFile acquiring /Volumes/data/.minikube/proxy-client-ca.key: {Name:mk71af80b6b97a3da325baae57cc3051fb88138c Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0105 11:08:19.751139 30321 certs.go:302] generating minikube-user signed cert: /Volumes/data/.minikube/profiles/cluster-unfixed/client.key I0105 11:08:19.751157 30321 crypto.go:68] Generating cert /Volumes/data/.minikube/profiles/cluster-unfixed/client.crt with IP's: [] I0105 11:08:20.032443 30321 crypto.go:156] Writing cert to /Volumes/data/.minikube/profiles/cluster-unfixed/client.crt ... 
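The kubeadm configuration rendered a little earlier carries the non-default DNS domain from the command line (dnsDomain: cluster.xpt in ClusterConfiguration, clusterDomain: "cluster.xpt" in KubeletConfiguration). kubeadm propagates that value into the CoreDNS Corefile and into the kubelet config referenced by --config above, so once the control plane is up it can be spot-checked; a sketch, assuming the kubeconfig context is named after the profile:

    $ kubectl --context cluster-unfixed -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep cluster.xpt
    $ minikube -p cluster-unfixed ssh "grep clusterDomain /var/lib/kubelet/config.yaml"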
I0105 11:08:20.032456 30321 lock.go:35] WriteFile acquiring /Volumes/data/.minikube/profiles/cluster-unfixed/client.crt: {Name:mkd0a1cb92e85ff866e31958141e5cd8782154c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0105 11:08:20.032812 30321 crypto.go:164] Writing key to /Volumes/data/.minikube/profiles/cluster-unfixed/client.key ... I0105 11:08:20.032825 30321 lock.go:35] WriteFile acquiring /Volumes/data/.minikube/profiles/cluster-unfixed/client.key: {Name:mk1786ee2c55136418364517410840998a7b8a54 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0105 11:08:20.033146 30321 certs.go:302] generating minikube signed cert: /Volumes/data/.minikube/profiles/cluster-unfixed/apiserver.key.11df9ca5 I0105 11:08:20.033173 30321 crypto.go:68] Generating cert /Volumes/data/.minikube/profiles/cluster-unfixed/apiserver.crt.11df9ca5 with IP's: [192.168.252.129 10.96.0.1 127.0.0.1 10.0.0.1] I0105 11:08:20.213102 30321 crypto.go:156] Writing cert to /Volumes/data/.minikube/profiles/cluster-unfixed/apiserver.crt.11df9ca5 ... I0105 11:08:20.213116 30321 lock.go:35] WriteFile acquiring /Volumes/data/.minikube/profiles/cluster-unfixed/apiserver.crt.11df9ca5: {Name:mkebb3b1bee640836bd473b3656791395d1c596f Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0105 11:08:20.213496 30321 crypto.go:164] Writing key to /Volumes/data/.minikube/profiles/cluster-unfixed/apiserver.key.11df9ca5 ... I0105 11:08:20.213503 30321 lock.go:35] WriteFile acquiring /Volumes/data/.minikube/profiles/cluster-unfixed/apiserver.key.11df9ca5: {Name:mkcc74be0a9849ce8b68720caa49c671363f35b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0105 11:08:20.213780 30321 certs.go:320] copying /Volumes/data/.minikube/profiles/cluster-unfixed/apiserver.crt.11df9ca5 -> /Volumes/data/.minikube/profiles/cluster-unfixed/apiserver.crt I0105 11:08:20.214004 30321 certs.go:324] copying /Volumes/data/.minikube/profiles/cluster-unfixed/apiserver.key.11df9ca5 -> /Volumes/data/.minikube/profiles/cluster-unfixed/apiserver.key I0105 11:08:20.214215 30321 certs.go:302] generating aggregator signed cert: /Volumes/data/.minikube/profiles/cluster-unfixed/proxy-client.key I0105 11:08:20.214240 30321 crypto.go:68] Generating cert /Volumes/data/.minikube/profiles/cluster-unfixed/proxy-client.crt with IP's: [] I0105 11:08:20.316087 30321 crypto.go:156] Writing cert to /Volumes/data/.minikube/profiles/cluster-unfixed/proxy-client.crt ... I0105 11:08:20.316099 30321 lock.go:35] WriteFile acquiring /Volumes/data/.minikube/profiles/cluster-unfixed/proxy-client.crt: {Name:mk59399a983d1c241e0d8852c7a42af80e34dbc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0105 11:08:20.316433 30321 crypto.go:164] Writing key to /Volumes/data/.minikube/profiles/cluster-unfixed/proxy-client.key ... 
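The apiserver certificate generated above is signed for the IPs 192.168.252.129, 10.96.0.1, 127.0.0.1 and 10.0.0.1. If you need to verify those SANs on the host side, something along these lines should work (the path is the one logged above; openssl on the workstation is assumed):

  # List the Subject Alternative Names baked into the freshly generated API server cert
  openssl x509 -noout -text \
    -in /Volumes/data/.minikube/profiles/cluster-unfixed/apiserver.crt \
    | grep -A1 "Subject Alternative Name"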
I0105 11:08:20.316440 30321 lock.go:35] WriteFile acquiring /Volumes/data/.minikube/profiles/cluster-unfixed/proxy-client.key: {Name:mk8a411b5cdc764f68c46e1a5855ceec3ada3f0b Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0105 11:08:20.317021 30321 certs.go:388] found cert: /Volumes/data/.minikube/certs/Volumes/data/.minikube/certs/ca-key.pem (1675 bytes) I0105 11:08:20.317086 30321 certs.go:388] found cert: /Volumes/data/.minikube/certs/Volumes/data/.minikube/certs/ca.pem (1078 bytes) I0105 11:08:20.317136 30321 certs.go:388] found cert: /Volumes/data/.minikube/certs/Volumes/data/.minikube/certs/cert.pem (1123 bytes) I0105 11:08:20.317187 30321 certs.go:388] found cert: /Volumes/data/.minikube/certs/Volumes/data/.minikube/certs/key.pem (1675 bytes) I0105 11:08:20.317672 30321 ssh_runner.go:362] scp /Volumes/data/.minikube/profiles/cluster-unfixed/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1395 bytes) I0105 11:08:20.350610 30321 ssh_runner.go:362] scp /Volumes/data/.minikube/profiles/cluster-unfixed/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes) I0105 11:08:20.381540 30321 ssh_runner.go:362] scp /Volumes/data/.minikube/profiles/cluster-unfixed/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) I0105 11:08:20.412020 30321 ssh_runner.go:362] scp /Volumes/data/.minikube/profiles/cluster-unfixed/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes) I0105 11:08:20.442875 30321 ssh_runner.go:362] scp /Volumes/data/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0105 11:08:20.474512 30321 ssh_runner.go:362] scp /Volumes/data/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes) I0105 11:08:20.505707 30321 ssh_runner.go:362] scp /Volumes/data/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0105 11:08:20.538668 30321 ssh_runner.go:362] scp /Volumes/data/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes) I0105 11:08:20.571027 30321 ssh_runner.go:362] scp /Volumes/data/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0105 11:08:20.602364 30321 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0105 11:08:20.626843 30321 ssh_runner.go:195] Run: openssl version I0105 11:08:20.634105 30321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0105 11:08:20.647466 30321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0105 11:08:20.653609 30321 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 5 19:08 /usr/share/ca-certificates/minikubeCA.pem I0105 11:08:20.653745 30321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0105 11:08:20.661791 30321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0105 11:08:20.675049 30321 kubeadm.go:396] StartCluster: {Name:cluster-unfixed KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.28.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:vmware HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] 
RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:cluster-unfixed Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.xpt ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.252.129 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} I0105 11:08:20.675268 30321 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0105 11:08:20.701138 30321 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0105 11:08:20.714004 30321 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0105 11:08:20.725696 30321 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0105 11:08:20.737193 30321 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0105 11:08:20.737224 30321 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem" I0105 11:08:20.789782 30321 kubeadm.go:317] W0105 19:08:21.279056 1407 initconfiguration.go:119] Usage of CRI endpoints without 
URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration! I0105 11:08:20.961565 30321 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' I0105 11:08:45.689080 30321 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3 I0105 11:08:45.689149 30321 kubeadm.go:317] [preflight] Running pre-flight checks I0105 11:08:45.689251 30321 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster I0105 11:08:45.689406 30321 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection I0105 11:08:45.689533 30321 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' I0105 11:08:45.689622 30321 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs" I0105 11:08:45.691621 30321 out.go:204] โ–ช Generating certificates and keys ... I0105 11:08:45.691723 30321 kubeadm.go:317] [certs] Using existing ca certificate authority I0105 11:08:45.691830 30321 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk I0105 11:08:45.691921 30321 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key I0105 11:08:45.692011 30321 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key I0105 11:08:45.692088 30321 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key I0105 11:08:45.692153 30321 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key I0105 11:08:45.692220 30321 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key I0105 11:08:45.692372 30321 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [cluster-unfixed localhost] and IPs [192.168.252.129 127.0.0.1 ::1] I0105 11:08:45.692478 30321 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key I0105 11:08:45.692680 30321 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [cluster-unfixed localhost] and IPs [192.168.252.129 127.0.0.1 ::1] I0105 11:08:45.692773 30321 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key I0105 11:08:45.692874 30321 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key I0105 11:08:45.692934 30321 kubeadm.go:317] [certs] Generating "sa" key and public key I0105 11:08:45.693012 30321 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes" I0105 11:08:45.693091 30321 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file I0105 11:08:45.693182 30321 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file I0105 11:08:45.693259 30321 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file I0105 11:08:45.693324 30321 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file I0105 11:08:45.693465 30321 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" I0105 11:08:45.693595 30321 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" I0105 11:08:45.693656 30321 kubeadm.go:317] [kubelet-start] Starting the kubelet I0105 11:08:45.693773 30321 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests" I0105 11:08:45.698925 30321 out.go:204] โ–ช Booting up 
control plane ... I0105 11:08:45.699082 30321 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver" I0105 11:08:45.699169 30321 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager" I0105 11:08:45.699262 30321 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler" I0105 11:08:45.699390 30321 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" I0105 11:08:45.699598 30321 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s I0105 11:08:45.699684 30321 kubeadm.go:317] [apiclient] All control plane components are healthy after 18.502791 seconds I0105 11:08:45.699852 30321 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace I0105 11:08:45.699991 30321 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster I0105 11:08:45.700136 30321 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs I0105 11:08:45.700367 30321 kubeadm.go:317] [mark-control-plane] Marking the node cluster-unfixed as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers] I0105 11:08:45.700453 30321 kubeadm.go:317] [bootstrap-token] Using token: s5hzc3.36gs06wepb4n7lal I0105 11:08:45.710948 30321 out.go:204] โ–ช Configuring RBAC rules ... I0105 11:08:45.711113 30321 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles I0105 11:08:45.711228 30321 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes I0105 11:08:45.711426 30321 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials I0105 11:08:45.711625 30321 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token I0105 11:08:45.711784 30321 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster I0105 11:08:45.711906 30321 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace I0105 11:08:45.712041 30321 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key I0105 11:08:45.712097 30321 kubeadm.go:317] [addons] Applied essential addon: CoreDNS I0105 11:08:45.712152 30321 kubeadm.go:317] [addons] Applied essential addon: kube-proxy I0105 11:08:45.712156 30321 kubeadm.go:317] I0105 11:08:45.712222 30321 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully! 
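kubeadm reports the control plane as initialized at this point, with the static Pod manifests under /etc/kubernetes/manifests. A minimal sketch for verifying that state by hand (profile name and manifest directory come from the log; kubectl is assumed to already point at the new context):

  # Static Pod manifests written by kubeadm inside the VM
  minikube -p cluster-unfixed ssh "sudo ls /etc/kubernetes/manifests"
  # The control-plane Pods they produce
  kubectl -n kube-system get pods -o wide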
I0105 11:08:45.712225 30321 kubeadm.go:317] I0105 11:08:45.712332 30321 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user: I0105 11:08:45.712338 30321 kubeadm.go:317] I0105 11:08:45.712367 30321 kubeadm.go:317] mkdir -p $HOME/.kube I0105 11:08:45.712429 30321 kubeadm.go:317] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config I0105 11:08:45.712485 30321 kubeadm.go:317] sudo chown $(id -u):$(id -g) $HOME/.kube/config I0105 11:08:45.712492 30321 kubeadm.go:317] I0105 11:08:45.712545 30321 kubeadm.go:317] Alternatively, if you are the root user, you can run: I0105 11:08:45.712548 30321 kubeadm.go:317] I0105 11:08:45.712631 30321 kubeadm.go:317] export KUBECONFIG=/etc/kubernetes/admin.conf I0105 11:08:45.712638 30321 kubeadm.go:317] I0105 11:08:45.712716 30321 kubeadm.go:317] You should now deploy a pod network to the cluster. I0105 11:08:45.712807 30321 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: I0105 11:08:45.712916 30321 kubeadm.go:317] https://kubernetes.io/docs/concepts/cluster-administration/addons/ I0105 11:08:45.712920 30321 kubeadm.go:317] I0105 11:08:45.713019 30321 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities I0105 11:08:45.713113 30321 kubeadm.go:317] and service account keys on each node and then running the following as root: I0105 11:08:45.713117 30321 kubeadm.go:317] I0105 11:08:45.713215 30321 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token s5hzc3.36gs06wepb4n7lal \ I0105 11:08:45.713351 30321 kubeadm.go:317] --discovery-token-ca-cert-hash sha256:9d7d82e24258c3b76326566c6e8f41528af7bde2d00feb80fb28a4f6c055f883 \ I0105 11:08:45.713374 30321 kubeadm.go:317] --control-plane I0105 11:08:45.713380 30321 kubeadm.go:317] I0105 11:08:45.713492 30321 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root: I0105 11:08:45.713509 30321 kubeadm.go:317] I0105 11:08:45.713633 30321 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token s5hzc3.36gs06wepb4n7lal \ I0105 11:08:45.713795 30321 kubeadm.go:317] --discovery-token-ca-cert-hash sha256:9d7d82e24258c3b76326566c6e8f41528af7bde2d00feb80fb28a4f6c055f883 I0105 11:08:45.713815 30321 cni.go:95] Creating CNI manager for "" I0105 11:08:45.713822 30321 cni.go:169] CNI unnecessary in this configuration, recommending no CNI I0105 11:08:45.713850 30321 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I0105 11:08:45.715200 30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig I0105 11:08:45.715230 30321 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=986b1ebd987211ed16f8cc10aed7d2c42fc8392f minikube.k8s.io/name=cluster-unfixed minikube.k8s.io/updated_at=2023_01_05T11_08_45_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig I0105 11:08:45.787588 30321 ops.go:34] apiserver oom_adj: -16 I0105 11:08:46.081589 30321 kubeadm.go:1067] duration metric: took 367.714957ms to wait for elevateKubeSystemPrivileges. 
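Right after init, minikube creates the minikube-rbac cluster role binding and labels the node, as shown in the two kubectl invocations above. Both can be spot-checked from the workstation (names are taken from the log; this is a sketch, not part of the original run):

  # Labels applied in the 'kubectl label nodes' step above
  kubectl get node cluster-unfixed --show-labels
  # RBAC binding granting cluster-admin to kube-system:default
  kubectl get clusterrolebinding minikube-rbac -o wide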
I0105 11:08:46.081614 30321 kubeadm.go:398] StartCluster complete in 25.408094398s I0105 11:08:46.081633 30321 settings.go:142] acquiring lock: {Name:mkcdf0464e0fb8aafd2a27b08735c10b897a5457 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0105 11:08:46.081743 30321 settings.go:150] Updating kubeconfig: /Users/rogermm/.kube/config I0105 11:08:46.083175 30321 lock.go:35] WriteFile acquiring /Users/rogermm/.kube/config: {Name:mk8a54556fcf5b2efd0025fe19350844cba96251 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0105 11:08:46.608975 30321 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "cluster-unfixed" rescaled to 1 I0105 11:08:46.609034 30321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml" I0105 11:08:46.609056 30321 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.252.129 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} I0105 11:08:46.611152 30321 out.go:177] ๐Ÿ”Ž Verifying Kubernetes components... I0105 11:08:46.609103 30321 addons.go:486] enableAddons start: toEnable=map[], additional=[] I0105 11:08:46.609338 30321 config.go:180] Loaded profile config "cluster-unfixed": Driver=vmware, ContainerRuntime=docker, KubernetesVersion=v1.25.3 I0105 11:08:46.611242 30321 addons.go:65] Setting storage-provisioner=true in profile "cluster-unfixed" I0105 11:08:46.611263 30321 addons.go:65] Setting default-storageclass=true in profile "cluster-unfixed" I0105 11:08:46.618144 30321 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cluster-unfixed" I0105 11:08:46.618188 30321 addons.go:227] Setting addon storage-provisioner=true in "cluster-unfixed" W0105 11:08:46.618195 30321 addons.go:236] addon storage-provisioner should already be in state true I0105 11:08:46.618344 30321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0105 11:08:46.618543 30321 host.go:66] Checking if "cluster-unfixed" exists ... 
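The kubeconfig at /Users/rogermm/.kube/config has just been updated and the coredns deployment rescaled to a single replica. A quick cross-check (a sketch; assumes kubectl picks up that kubeconfig):

  # Context minikube just wrote
  kubectl config current-context
  # CoreDNS should now report 1/1 replicas
  kubectl -n kube-system get deployment coredns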
I0105 11:08:46.619316 30321 main.go:134] libmachine: Found binary path at /Applications/VMware Fusion.app/Contents/Public/docker-machine-driver-vmware I0105 11:08:46.619332 30321 main.go:134] libmachine: Found binary path at /Applications/VMware Fusion.app/Contents/Public/docker-machine-driver-vmware I0105 11:08:46.619354 30321 main.go:134] libmachine: Launching plugin server for driver vmware I0105 11:08:46.619375 30321 main.go:134] libmachine: Launching plugin server for driver vmware I0105 11:08:46.643476 30321 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:53010 I0105 11:08:46.643511 30321 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:53009 I0105 11:08:46.644222 30321 main.go:134] libmachine: () Calling .GetVersion I0105 11:08:46.644288 30321 main.go:134] libmachine: () Calling .GetVersion I0105 11:08:46.644918 30321 main.go:134] libmachine: Using API Version 1 I0105 11:08:46.644934 30321 main.go:134] libmachine: () Calling .SetConfigRaw I0105 11:08:46.645009 30321 main.go:134] libmachine: Using API Version 1 I0105 11:08:46.645035 30321 main.go:134] libmachine: () Calling .SetConfigRaw I0105 11:08:46.645293 30321 main.go:134] libmachine: () Calling .GetMachineName I0105 11:08:46.645378 30321 main.go:134] libmachine: () Calling .GetMachineName I0105 11:08:46.645463 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetState I0105 11:08:46.645666 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun list I0105 11:08:46.645987 30321 main.go:134] libmachine: Found binary path at /Applications/VMware Fusion.app/Contents/Public/docker-machine-driver-vmware I0105 11:08:46.646022 30321 main.go:134] libmachine: Launching plugin server for driver vmware I0105 11:08:46.656745 30321 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:53015 I0105 11:08:46.657391 30321 main.go:134] libmachine: () Calling .GetVersion I0105 11:08:46.658027 30321 main.go:134] libmachine: Using API Version 1 I0105 11:08:46.658045 30321 main.go:134] libmachine: () Calling .SetConfigRaw I0105 11:08:46.658425 30321 main.go:134] libmachine: () Calling .GetMachineName I0105 11:08:46.658595 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetState I0105 11:08:46.658810 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun list I0105 11:08:46.710296 30321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.252.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -" I0105 11:08:46.712005 30321 api_server.go:51] waiting for apiserver process to appear ... 
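The sed pipeline above injects a hosts block mapping host.minikube.internal to 192.168.252.1 into the CoreDNS ConfigMap, ahead of the forward plugin. To see the resulting Corefile (a sketch using plain kubectl; not part of the original run):

  # Inspect the live Corefile; expect a hosts { 192.168.252.1 host.minikube.internal ... } stanza
  kubectl -n kube-system get configmap coredns -o yaml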
I0105 11:08:46.712111 30321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0105 11:08:47.011942 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Total running VMs: 1 I0105 11:08:47.012003 30321 main.go:134] libmachine: (cluster-unfixed) DBG | /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx I0105 11:08:47.012160 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Total running VMs: 1 I0105 11:08:47.012171 30321 main.go:134] libmachine: (cluster-unfixed) DBG | /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx I0105 11:08:47.016372 30321 main.go:134] libmachine: (cluster-unfixed) Calling .DriverName I0105 11:08:47.021053 30321 out.go:177] โ–ช Using image gcr.io/k8s-minikube/storage-provisioner:v5 I0105 11:08:47.021336 30321 addons.go:227] Setting addon default-storageclass=true in "cluster-unfixed" W0105 11:08:47.025096 30321 addons.go:236] addon default-storageclass should already be in state true I0105 11:08:47.025118 30321 host.go:66] Checking if "cluster-unfixed" exists ... I0105 11:08:47.025167 30321 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml I0105 11:08:47.025175 30321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I0105 11:08:47.025187 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHHostname I0105 11:08:47.025536 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun list I0105 11:08:47.025731 30321 main.go:134] libmachine: Found binary path at /Applications/VMware Fusion.app/Contents/Public/docker-machine-driver-vmware I0105 11:08:47.025776 30321 main.go:134] libmachine: Launching plugin server for driver vmware I0105 11:08:47.038635 30321 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:53019 I0105 11:08:47.039320 30321 main.go:134] libmachine: () Calling .GetVersion I0105 11:08:47.039981 30321 main.go:134] libmachine: Using API Version 1 I0105 11:08:47.039991 30321 main.go:134] libmachine: () Calling .SetConfigRaw I0105 11:08:47.040351 30321 main.go:134] libmachine: () Calling .GetMachineName I0105 11:08:47.041000 30321 main.go:134] libmachine: Found binary path at /Applications/VMware Fusion.app/Contents/Public/docker-machine-driver-vmware I0105 11:08:47.041033 30321 main.go:134] libmachine: Launching plugin server for driver vmware I0105 11:08:47.052579 30321 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:53023 I0105 11:08:47.053312 30321 main.go:134] libmachine: () Calling .GetVersion I0105 11:08:47.053883 30321 main.go:134] libmachine: Using API Version 1 I0105 11:08:47.053898 30321 main.go:134] libmachine: () Calling .SetConfigRaw I0105 11:08:47.054265 30321 main.go:134] libmachine: () Calling .GetMachineName I0105 11:08:47.054443 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetState I0105 11:08:47.054668 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun list I0105 11:08:47.378168 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Total running VMs: 1 I0105 11:08:47.378192 30321 main.go:134] libmachine: (cluster-unfixed) DBG | /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx I0105 11:08:47.379628 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Total running VMs: 1 I0105 11:08:47.379646 30321 main.go:134] libmachine: (cluster-unfixed) DBG | 
/Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx I0105 11:08:47.381636 30321 main.go:134] libmachine: (cluster-unfixed) Calling .DriverName I0105 11:08:47.382193 30321 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml I0105 11:08:47.382201 30321 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes) I0105 11:08:47.382225 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHHostname I0105 11:08:47.382497 30321 main.go:134] libmachine: (cluster-unfixed) DBG | executing: /Applications/VMware Fusion.app/Contents/Public/vmrun list I0105 11:08:47.382931 30321 main.go:134] libmachine: (cluster-unfixed) DBG | MAC address in VMX: 00:0c:29:91:0a:45 I0105 11:08:47.383492 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet1/dhcpd.conf I0105 11:08:47.383665 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:01:192.168.20.1] I0105 11:08:47.383676 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf I0105 11:08:47.383799 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:08:192.168.252.1] I0105 11:08:47.384058 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet1.leases I0105 11:08:47.384221 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet8.leases I0105 11:08:47.384625 30321 main.go:134] libmachine: (cluster-unfixed) DBG | IP found in DHCP lease table: 192.168.252.129 I0105 11:08:47.384701 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHPort I0105 11:08:47.384971 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHKeyPath I0105 11:08:47.385127 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHUsername I0105 11:08:47.385278 30321 sshutil.go:53] new ssh client: &{IP:192.168.252.129 Port:22 SSHKeyPath:/Volumes/data/.minikube/machines/cluster-unfixed/id_rsa Username:docker} I0105 11:08:47.465334 30321 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml I0105 11:08:47.682064 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Total running VMs: 1 I0105 11:08:47.682091 30321 main.go:134] libmachine: (cluster-unfixed) DBG | /Volumes/data/.minikube/machines/cluster-unfixed/cluster-unfixed.vmx I0105 11:08:47.685769 30321 main.go:134] libmachine: (cluster-unfixed) DBG | MAC address in VMX: 00:0c:29:91:0a:45 I0105 11:08:47.686099 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet1/dhcpd.conf I0105 11:08:47.686214 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:01:192.168.20.1] I0105 11:08:47.686226 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf I0105 11:08:47.686338 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Following IPs found map[00:50:56:c0:00:08:192.168.252.1] I0105 11:08:47.686423 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: 
/var/db/vmware/vmnet-dhcpd-vmnet1.leases I0105 11:08:47.686538 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet8.leases I0105 11:08:47.686709 30321 main.go:134] libmachine: (cluster-unfixed) DBG | IP found in DHCP lease table: 192.168.252.129 I0105 11:08:47.686832 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHPort I0105 11:08:47.687051 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHKeyPath I0105 11:08:47.687230 30321 main.go:134] libmachine: (cluster-unfixed) Calling .GetSSHUsername I0105 11:08:47.687416 30321 sshutil.go:53] new ssh client: &{IP:192.168.252.129 Port:22 SSHKeyPath:/Volumes/data/.minikube/machines/cluster-unfixed/id_rsa Username:docker} I0105 11:08:47.808786 30321 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml I0105 11:08:48.016056 30321 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.252.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.305804785s) I0105 11:08:48.016073 30321 start.go:826] {"host.minikube.internal": 192.168.252.1} host record injected into CoreDNS I0105 11:08:48.016076 30321 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.304028329s) I0105 11:08:48.016091 30321 api_server.go:71] duration metric: took 1.407097236s to wait for apiserver process to appear ... I0105 11:08:48.016105 30321 api_server.go:87] waiting for apiserver healthz status ... I0105 11:08:48.016125 30321 api_server.go:252] Checking apiserver healthz at https://192.168.252.129:8443/healthz ... I0105 11:08:48.016169 30321 main.go:134] libmachine: Making call to close driver server I0105 11:08:48.016178 30321 main.go:134] libmachine: (cluster-unfixed) Calling .Close I0105 11:08:48.016408 30321 main.go:134] libmachine: Successfully made call to close driver server I0105 11:08:48.016423 30321 main.go:134] libmachine: Making call to close connection to plugin binary I0105 11:08:48.016434 30321 main.go:134] libmachine: Making call to close driver server I0105 11:08:48.016457 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Closing plugin on server side I0105 11:08:48.016504 30321 main.go:134] libmachine: (cluster-unfixed) Calling .Close I0105 11:08:48.016760 30321 main.go:134] libmachine: Successfully made call to close driver server I0105 11:08:48.016767 30321 main.go:134] libmachine: Making call to close connection to plugin binary I0105 11:08:48.022917 30321 api_server.go:278] https://192.168.252.129:8443/healthz returned 200: ok I0105 11:08:48.024077 30321 api_server.go:140] control plane version: v1.25.3 I0105 11:08:48.024088 30321 api_server.go:130] duration metric: took 7.980128ms to wait for apiserver health ... I0105 11:08:48.024098 30321 system_pods.go:43] waiting for kube-system pods to appear ... 
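The healthz probe above succeeds against https://192.168.252.129:8443/healthz and reports control plane v1.25.3. The same probe can be repeated by hand (a sketch; the curl variant assumes the default anonymous access to /healthz and skips CA verification with -k, which is acceptable for a local check):

  # Through the API machinery, using the configured credentials
  kubectl get --raw /healthz
  # Or directly against the endpoint from the log
  curl -k https://192.168.252.129:8443/healthz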
I0105 11:08:48.033565 30321 system_pods.go:59] 5 kube-system pods found I0105 11:08:48.033575 30321 system_pods.go:61] "etcd-cluster-unfixed" [c3218fb1-ee9e-451d-82b9-f6696ea26e1e] Pending I0105 11:08:48.033579 30321 system_pods.go:61] "kube-apiserver-cluster-unfixed" [a6e0adc3-914b-4dbc-9788-4a606016aa31] Pending I0105 11:08:48.033584 30321 system_pods.go:61] "kube-controller-manager-cluster-unfixed" [d4e06c46-be28-47aa-9015-cf1ab88e88ed] Running I0105 11:08:48.033587 30321 system_pods.go:61] "kube-scheduler-cluster-unfixed" [e1561b2c-d998-4ec6-8d87-fe6ae37866bc] Pending I0105 11:08:48.033592 30321 system_pods.go:61] "storage-provisioner" [498fc817-30d2-438d-b2c8-181521eded6a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.) I0105 11:08:48.033595 30321 system_pods.go:74] duration metric: took 9.494359ms to wait for pod list to return data ... I0105 11:08:48.033601 30321 kubeadm.go:573] duration metric: took 1.42460861s to wait for : map[apiserver:true system_pods:true] ... I0105 11:08:48.033609 30321 node_conditions.go:102] verifying NodePressure condition ... I0105 11:08:48.038951 30321 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki I0105 11:08:48.038966 30321 node_conditions.go:123] node cpu capacity is 2 I0105 11:08:48.038977 30321 node_conditions.go:105] duration metric: took 5.363615ms to run NodePressure ... I0105 11:08:48.039002 30321 start.go:217] waiting for startup goroutines ... I0105 11:08:48.186689 30321 main.go:134] libmachine: Making call to close driver server I0105 11:08:48.186706 30321 main.go:134] libmachine: (cluster-unfixed) Calling .Close I0105 11:08:48.186949 30321 main.go:134] libmachine: Successfully made call to close driver server I0105 11:08:48.186957 30321 main.go:134] libmachine: Making call to close connection to plugin binary I0105 11:08:48.186963 30321 main.go:134] libmachine: Making call to close driver server I0105 11:08:48.186967 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Closing plugin on server side I0105 11:08:48.186969 30321 main.go:134] libmachine: (cluster-unfixed) Calling .Close I0105 11:08:48.187215 30321 main.go:134] libmachine: Successfully made call to close driver server I0105 11:08:48.187223 30321 main.go:134] libmachine: Making call to close connection to plugin binary I0105 11:08:48.187240 30321 main.go:134] libmachine: Making call to close driver server I0105 11:08:48.187242 30321 main.go:134] libmachine: (cluster-unfixed) DBG | Closing plugin on server side I0105 11:08:48.187246 30321 main.go:134] libmachine: (cluster-unfixed) Calling .Close I0105 11:08:48.187458 30321 main.go:134] libmachine: Successfully made call to close driver server I0105 11:08:48.187466 30321 main.go:134] libmachine: Making call to close connection to plugin binary I0105 11:08:48.193502 30321 out.go:177] ๐ŸŒŸ Enabled addons: storage-provisioner, default-storageclass I0105 11:08:48.197127 30321 addons.go:488] enableAddons completed in 1.588139011s I0105 11:08:48.198226 30321 ssh_runner.go:195] Run: rm -f paused I0105 11:08:48.479345 30321 start.go:506] kubectl: 1.23.15, cluster: 1.25.3 (minor skew: 2) I0105 11:08:48.481520 30321 out.go:177] W0105 11:08:48.485801 30321 out.go:239] โ— /Users/rogermm/.asdf/shims/kubectl is version 1.23.15, which may have incompatibilities with Kubernetes 1.25.3. I0105 11:08:48.489915 30321 out.go:177] โ–ช Want kubectl v1.25.3? 
Try 'minikube kubectl -- get pods -A' I0105 11:08:48.498217 30321 out.go:177] ๐Ÿ„ Done! kubectl is now configured to use "cluster-unfixed" cluster and "default" namespace by default * * ==> Docker <== * -- Journal begins at Thu 2023-01-05 11:07:38 UTC, ends at Thu 2023-01-05 19:10:34 UTC. -- Jan 05 19:08:37 cluster-unfixed dockerd[1085]: time="2023-01-05T19:08:37.668678749Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ca0acf1c7075afc2032c1328769406095ce79dd6ba477c3d329b987aabd76d9d pid=1881 runtime=io.containerd.runc.v2 Jan 05 19:08:38 cluster-unfixed dockerd[1085]: time="2023-01-05T19:08:38.340583208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 05 19:08:38 cluster-unfixed dockerd[1085]: time="2023-01-05T19:08:38.340849577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 05 19:08:38 cluster-unfixed dockerd[1085]: time="2023-01-05T19:08:38.343415233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 05 19:08:38 cluster-unfixed dockerd[1085]: time="2023-01-05T19:08:38.344190050Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/969a1b7175cc30e198b9af9030d178c9c0eb771d7ed1e60967f5dc625a98fb30 pid=1944 runtime=io.containerd.runc.v2 Jan 05 19:08:38 cluster-unfixed dockerd[1085]: time="2023-01-05T19:08:38.513787702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 05 19:08:38 cluster-unfixed dockerd[1085]: time="2023-01-05T19:08:38.513853150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 05 19:08:38 cluster-unfixed dockerd[1085]: time="2023-01-05T19:08:38.513866460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 05 19:08:38 cluster-unfixed dockerd[1085]: time="2023-01-05T19:08:38.514024145Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/4f44dfa79334e1343b6e68665b2f285b756930005b858d88a1b4f95eb93ac5c3 pid=1985 runtime=io.containerd.runc.v2 Jan 05 19:08:59 cluster-unfixed dockerd[1085]: time="2023-01-05T19:08:59.488025793Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 05 19:08:59 cluster-unfixed dockerd[1085]: time="2023-01-05T19:08:59.488650123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 05 19:08:59 cluster-unfixed dockerd[1085]: time="2023-01-05T19:08:59.488821949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 05 19:08:59 cluster-unfixed dockerd[1085]: time="2023-01-05T19:08:59.491060538Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/31aecfd858f71a78606eea89b53ee7ae246222e5c8a6b998444aaff6ffefaac2 pid=2419 runtime=io.containerd.runc.v2 Jan 05 19:08:59 cluster-unfixed dockerd[1085]: time="2023-01-05T19:08:59.910724463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 05 19:08:59 cluster-unfixed dockerd[1085]: time="2023-01-05T19:08:59.910896679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 05 19:08:59 cluster-unfixed dockerd[1085]: time="2023-01-05T19:08:59.910989930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 05 19:08:59 cluster-unfixed dockerd[1085]: time="2023-01-05T19:08:59.911309488Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/bf2023335a3addc0c782533bc525ffd3befdcb000e4e56a92d574b5a8b2f9f33 pid=2462 runtime=io.containerd.runc.v2 Jan 05 19:08:59 cluster-unfixed dockerd[1085]: time="2023-01-05T19:08:59.951367091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 05 19:08:59 cluster-unfixed dockerd[1085]: time="2023-01-05T19:08:59.951912306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 05 19:08:59 cluster-unfixed dockerd[1085]: time="2023-01-05T19:08:59.952181557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 05 19:08:59 cluster-unfixed dockerd[1085]: time="2023-01-05T19:08:59.953884232Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/00cd26077a20dab283b828bc139146b60b5673c2f08eb29bcc46c83057f942c7 pid=2486 runtime=io.containerd.runc.v2 Jan 05 19:09:00 cluster-unfixed dockerd[1085]: time="2023-01-05T19:09:00.300872456Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 05 19:09:00 cluster-unfixed dockerd[1085]: time="2023-01-05T19:09:00.302922193Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 05 19:09:00 cluster-unfixed dockerd[1085]: time="2023-01-05T19:09:00.303691502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 05 19:09:00 cluster-unfixed dockerd[1085]: time="2023-01-05T19:09:00.305905929Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/1ffb8497230b9bfbd569927fdb8f0850342b2434a43c6a1773ae799257a0c36e pid=2562 runtime=io.containerd.runc.v2 Jan 05 19:09:00 cluster-unfixed dockerd[1085]: time="2023-01-05T19:09:00.371930035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 05 19:09:00 cluster-unfixed dockerd[1085]: time="2023-01-05T19:09:00.372240840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 05 19:09:00 cluster-unfixed dockerd[1085]: time="2023-01-05T19:09:00.372255108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 05 19:09:00 cluster-unfixed dockerd[1085]: time="2023-01-05T19:09:00.372953314Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/cb3015f51c5289739ecab126439e3f1bea663734ea8b9ff380d0080146c80eef pid=2585 runtime=io.containerd.runc.v2 Jan 05 19:09:00 cluster-unfixed dockerd[1085]: time="2023-01-05T19:09:00.384850585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 05 19:09:00 cluster-unfixed dockerd[1085]: time="2023-01-05T19:09:00.384926137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 05 19:09:00 cluster-unfixed dockerd[1085]: time="2023-01-05T19:09:00.384937321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 05 19:09:00 cluster-unfixed dockerd[1085]: time="2023-01-05T19:09:00.401902463Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f63e11a070a2f7a029802a9f25c3b0b44c1f6562ce10ac136c0b3ee42fa11c7d pid=2601 runtime=io.containerd.runc.v2 Jan 05 19:09:00 cluster-unfixed dockerd[1085]: time="2023-01-05T19:09:00.949737526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 05 19:09:00 cluster-unfixed dockerd[1085]: time="2023-01-05T19:09:00.949807920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 05 19:09:00 cluster-unfixed dockerd[1085]: time="2023-01-05T19:09:00.949820677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 05 19:09:00 cluster-unfixed dockerd[1085]: time="2023-01-05T19:09:00.961329146Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/4e3aa9906752e8a3a8b6984cb05e0cd994ee7058a36d59204bf1fb4847c513c7 pid=2729 runtime=io.containerd.runc.v2 Jan 05 19:09:01 cluster-unfixed dockerd[1085]: time="2023-01-05T19:09:01.051890908Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 05 19:09:01 cluster-unfixed dockerd[1085]: time="2023-01-05T19:09:01.052205505Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 05 19:09:01 cluster-unfixed dockerd[1085]: time="2023-01-05T19:09:01.052467950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 05 19:09:01 cluster-unfixed dockerd[1085]: time="2023-01-05T19:09:01.055551356Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/90fad892406232b3962223a3c98d420151ff611315e8ada7cda9c60b8f6ebe1a pid=2792 runtime=io.containerd.runc.v2 Jan 05 19:09:02 cluster-unfixed dockerd[1078]: time="2023-01-05T19:09:02.288751741Z" level=warning msg="reference for unknown type: " digest="sha256:83bb78d7b28f1ac99c68133af32c93e9a1c149bcd3cb6e683a3ee56e312f1c96" remote="docker.io/library/registry@sha256:83bb78d7b28f1ac99c68133af32c93e9a1c149bcd3cb6e683a3ee56e312f1c96" Jan 05 19:09:06 cluster-unfixed dockerd[1085]: time="2023-01-05T19:09:06.141888628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 05 19:09:06 cluster-unfixed dockerd[1085]: time="2023-01-05T19:09:06.142150957Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 05 19:09:06 cluster-unfixed dockerd[1085]: time="2023-01-05T19:09:06.142185222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 05 19:09:06 cluster-unfixed dockerd[1085]: time="2023-01-05T19:09:06.142620262Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/589009452e0c231de6e16cc2aa51a0e7db18e3223e2a33ff2cfac76882ad761d pid=3006 runtime=io.containerd.runc.v2 Jan 05 19:09:12 cluster-unfixed dockerd[1078]: time="2023-01-05T19:09:12.593320744Z" level=warning msg="reference for unknown type: " digest="sha256:1040f25a5273de0d72c54865a8efd47e3292de9fb8e5353e3fa76736b854f2da" remote="gcr.io/google_containers/kube-registry-proxy@sha256:1040f25a5273de0d72c54865a8efd47e3292de9fb8e5353e3fa76736b854f2da" Jan 05 19:09:24 cluster-unfixed dockerd[1085]: time="2023-01-05T19:09:24.006140509Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 05 19:09:24 cluster-unfixed dockerd[1085]: time="2023-01-05T19:09:24.006226975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 05 19:09:24 cluster-unfixed dockerd[1085]: time="2023-01-05T19:09:24.006241551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 05 19:09:24 cluster-unfixed dockerd[1085]: time="2023-01-05T19:09:24.007287087Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/3fff08aae79a7707041d3153222f348cf000fba61c3171236101f174b0bc4af7 pid=3268 runtime=io.containerd.runc.v2 Jan 05 19:09:30 cluster-unfixed dockerd[1078]: time="2023-01-05T19:09:30.773224772Z" level=info msg="ignoring event" container=cb3015f51c5289739ecab126439e3f1bea663734ea8b9ff380d0080146c80eef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Jan 05 19:09:30 cluster-unfixed dockerd[1085]: time="2023-01-05T19:09:30.778718934Z" level=info msg="shim disconnected" id=cb3015f51c5289739ecab126439e3f1bea663734ea8b9ff380d0080146c80eef Jan 05 19:09:30 cluster-unfixed dockerd[1085]: time="2023-01-05T19:09:30.786108680Z" level=warning msg="cleaning up after shim disconnected" id=cb3015f51c5289739ecab126439e3f1bea663734ea8b9ff380d0080146c80eef namespace=moby Jan 05 19:09:30 cluster-unfixed dockerd[1085]: time="2023-01-05T19:09:30.786299963Z" level=info msg="cleaning up dead shim" Jan 05 19:09:30 cluster-unfixed dockerd[1085]: time="2023-01-05T19:09:30.800842753Z" level=warning msg="cleanup warnings time=\"2023-01-05T19:09:30Z\" level=info msg=\"starting signal loop\" namespace=moby pid=3327 runtime=io.containerd.runc.v2\n" Jan 05 19:09:31 cluster-unfixed dockerd[1085]: time="2023-01-05T19:09:31.235120967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 05 19:09:31 cluster-unfixed dockerd[1085]: time="2023-01-05T19:09:31.235381205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 05 19:09:31 cluster-unfixed dockerd[1085]: time="2023-01-05T19:09:31.235573293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 05 19:09:31 cluster-unfixed dockerd[1085]: time="2023-01-05T19:09:31.236290658Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/2f8e753e90248327ff363d8b0e5ec26237934e9cbfa42cdee53f20d19fce59e7 pid=3350 runtime=io.containerd.runc.v2 * * ==> container status <== * CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID 2f8e753e90248 6e38f40d628db About a minute ago Running storage-provisioner 1 31aecfd858f71 3fff08aae79a7 gcr.io/google_containers/kube-registry-proxy@sha256:1040f25a5273de0d72c54865a8efd47e3292de9fb8e5353e3fa76736b854f2da About a minute ago Running registry-proxy 0 4e3aa9906752e 589009452e0c2 registry@sha256:83bb78d7b28f1ac99c68133af32c93e9a1c149bcd3cb6e683a3ee56e312f1c96 About a minute ago Running registry 0 f63e11a070a2f 90fad89240623 5185b96f0becf About a minute ago Running coredns 0 bf2023335a3ad cb3015f51c528 6e38f40d628db About a minute ago Exited storage-provisioner 0 31aecfd858f71 1ffb8497230b9 beaaf00edd38a About a minute ago Running kube-proxy 0 00cd26077a20d 4f44dfa79334e 0346dbd74bcb9 About a minute ago Running kube-apiserver 0 ca0acf1c7075a 969a1b7175cc3 6d23ec0e8b87e About a minute ago Running kube-scheduler 0 b9f81eb2e71c0 2e696957c6f24 a8a176a5d5d69 About a minute ago Running etcd 0 254a48c236d41 d4c8aafd72e92 6039992312758 About a minute ago Running kube-controller-manager 0 0c7a46348b2a2 * * ==> coredns [90fad8924062] <== * .:53 [INFO] plugin/reload: Running configuration SHA512 = fb3c054b7ea7c5d42a69586fafce938477b4f846ae97697e60f205aa48374b687eddcc25a1b42c06abbea5f4b4d7a85915f576c8c0c3c63ea06cb1ae695d1694 CoreDNS-1.9.3 linux/amd64, go1.18.2, 45b0a11 [ERROR] plugin/errors: 2 registry.kube-system.svc.cluster.local. AAAA: read udp 172.17.0.2:55109->192.168.252.2:53: i/o timeout [ERROR] plugin/errors: 2 registry.kube-system.svc.cluster.local. A: read udp 172.17.0.2:47863->192.168.252.2:53: i/o timeout [ERROR] plugin/errors: 2 registry.kube-system.svc.cluster.local. A: read udp 172.17.0.2:44619->192.168.252.2:53: i/o timeout [ERROR] plugin/errors: 2 registry.kube-system.svc.cluster.local. AAAA: read udp 172.17.0.2:45224->192.168.252.2:53: i/o timeout [ERROR] plugin/errors: 2 registry.kube-system.svc.cluster.local. A: read udp 172.17.0.2:40848->192.168.252.2:53: i/o timeout [ERROR] plugin/errors: 2 registry.kube-system.svc.cluster.local. AAAA: read udp 172.17.0.2:43300->192.168.252.2:53: i/o timeout [ERROR] plugin/errors: 2 registry.kube-system.svc.cluster.local. AAAA: read udp 172.17.0.2:59828->192.168.252.2:53: i/o timeout [ERROR] plugin/errors: 2 registry.kube-system.svc.cluster.local. A: read udp 172.17.0.2:41104->192.168.252.2:53: i/o timeout [ERROR] plugin/errors: 2 registry.kube-system.svc.cluster.local. AAAA: read udp 172.17.0.2:44127->192.168.252.2:53: i/o timeout [ERROR] plugin/errors: 2 registry.kube-system.svc.cluster.local. A: read udp 172.17.0.2:52441->192.168.252.2:53: i/o timeout [ERROR] plugin/errors: 2 registry.kube-system.svc.cluster.local. AAAA: read udp 172.17.0.2:59153->192.168.252.2:53: i/o timeout [ERROR] plugin/errors: 2 registry.kube-system.svc.cluster.local. A: read udp 172.17.0.2:42144->192.168.252.2:53: i/o timeout [ERROR] plugin/errors: 2 registry.kube-system.svc.cluster.local. A: read udp 172.17.0.2:33753->192.168.252.2:53: i/o timeout [ERROR] plugin/errors: 2 registry.kube-system.svc.cluster.local. 
AAAA: read udp 172.17.0.2:39192->192.168.252.2:53: i/o timeout [ERROR] plugin/errors: 2 registry.kube-system.svc.cluster.local. AAAA: read udp 172.17.0.2:53439->192.168.252.2:53: i/o timeout [ERROR] plugin/errors: 2 registry.kube-system.svc.cluster.local. A: read udp 172.17.0.2:46689->192.168.252.2:53: i/o timeout [ERROR] plugin/errors: 2 registry.kube-system.svc.cluster.local. AAAA: read udp 172.17.0.2:51154->192.168.252.2:53: i/o timeout [ERROR] plugin/errors: 2 registry.kube-system.svc.cluster.local. A: read udp 172.17.0.2:54551->192.168.252.2:53: i/o timeout [ERROR] plugin/errors: 2 registry.kube-system.svc.cluster.local. A: read udp 172.17.0.2:42364->192.168.252.2:53: i/o timeout [ERROR] plugin/errors: 2 registry.kube-system.svc.cluster.local. AAAA: read udp 172.17.0.2:59505->192.168.252.2:53: i/o timeout [ERROR] plugin/errors: 2 registry.kube-system.svc.cluster.local. A: read udp 172.17.0.2:33352->192.168.252.2:53: i/o timeout [ERROR] plugin/errors: 2 registry.kube-system.svc.cluster.local. AAAA: read udp 172.17.0.2:44633->192.168.252.2:53: i/o timeout [ERROR] plugin/errors: 2 registry.kube-system.svc.cluster.local. A: read udp 172.17.0.2:36195->192.168.252.2:53: i/o timeout [ERROR] plugin/errors: 2 registry.kube-system.svc.cluster.local. AAAA: read udp 172.17.0.2:35119->192.168.252.2:53: i/o timeout [ERROR] plugin/errors: 2 registry.kube-system.svc.cluster.local. AAAA: read udp 172.17.0.2:45955->192.168.252.2:53: i/o timeout [ERROR] plugin/errors: 2 registry.kube-system.svc.cluster.local. A: read udp 172.17.0.2:54361->192.168.252.2:53: i/o timeout [ERROR] plugin/errors: 2 registry.kube-system.svc.cluster.local. A: read udp 172.17.0.2:60211->192.168.252.2:53: i/o timeout [ERROR] plugin/errors: 2 registry.kube-system.svc.cluster.local. AAAA: read udp 172.17.0.2:36876->192.168.252.2:53: i/o timeout [ERROR] plugin/errors: 2 registry.kube-system.svc.cluster.local. A: read udp 172.17.0.2:36667->192.168.252.2:53: i/o timeout [ERROR] plugin/errors: 2 registry.kube-system.svc.cluster.local. AAAA: read udp 172.17.0.2:60311->192.168.252.2:53: i/o timeout [ERROR] plugin/errors: 2 registry.kube-system.svc.cluster.local. AAAA: read udp 172.17.0.2:52369->192.168.252.2:53: i/o timeout [ERROR] plugin/errors: 2 registry.kube-system.svc.cluster.local. A: read udp 172.17.0.2:48562->192.168.252.2:53: i/o timeout [ERROR] plugin/errors: 2 registry.kube-system.svc.cluster.local. AAAA: read udp 172.17.0.2:32770->192.168.252.2:53: i/o timeout [ERROR] plugin/errors: 2 registry.kube-system.svc.cluster.local. 
A: read udp 172.17.0.2:46737->192.168.252.2:53: i/o timeout
* * ==> describe nodes <== *
Name:               cluster-unfixed
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=cluster-unfixed
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=986b1ebd987211ed16f8cc10aed7d2c42fc8392f
                    minikube.k8s.io/name=cluster-unfixed
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2023_01_05T11_08_45_0700
                    minikube.k8s.io/version=v1.28.0
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Thu, 05 Jan 2023 19:08:43 +0000
Taints:
Unschedulable:      false
Lease:
  HolderIdentity:  cluster-unfixed
  AcquireTime:
  RenewTime:       Thu, 05 Jan 2023 19:10:28 +0000
Conditions:
  Type            Status  LastHeartbeatTime                 LastTransitionTime                Reason                      Message
  ----            ------  -----------------                 ------------------                ------                      -------
  MemoryPressure  False   Thu, 05 Jan 2023 19:09:47 +0000   Thu, 05 Jan 2023 19:08:39 +0000   KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure    False   Thu, 05 Jan 2023 19:09:47 +0000   Thu, 05 Jan 2023 19:08:39 +0000   KubeletHasNoDiskPressure    kubelet has no disk pressure
  PIDPressure     False   Thu, 05 Jan 2023 19:09:47 +0000   Thu, 05 Jan 2023 19:08:39 +0000   KubeletHasSufficientPID     kubelet has sufficient PID available
  Ready           True    Thu, 05 Jan 2023 19:09:47 +0000   Thu, 05 Jan 2023 19:08:56 +0000   KubeletReady                kubelet is posting ready status
Addresses:
  InternalIP:  192.168.252.129
  Hostname:    cluster-unfixed
Capacity:
  cpu:                2
  ephemeral-storage:  17784752Ki
  hugepages-2Mi:      0
  memory:             5925660Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  17784752Ki
  hugepages-2Mi:      0
  memory:             5925660Ki
  pods:               110
System Info:
  Machine ID:                 078a3312a1424150866ffda505e20945
  System UUID:                564d21e9-33f3-847f-1502-d8f418910a45
  Boot ID:                    90b067cc-9be6-4696-a155-6313654aa1c0
  Kernel Version:             5.10.57
  OS Image:                   Buildroot 2021.02.12
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.20
  Kubelet Version:            v1.25.3
  Kube-Proxy Version:         v1.25.3
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (9 in total)
  Namespace    Name                                      CPU Requests          CPU Limits        Memory Requests        Memory Limits          Age
  ---------    ----                                      ------------          ----------        ---------------        -------------          ---
  kube-system  coredns-565d847f94-zdpqj                  100m (5%!)(MISSING)   0 (0%!)(MISSING)  70Mi (1%!)(MISSING)    170Mi (2%!)(MISSING)   95s
  kube-system  etcd-cluster-unfixed                      100m (5%!)(MISSING)   0 (0%!)(MISSING)  100Mi (1%!)(MISSING)   0 (0%!)(MISSING)       109s
  kube-system  kube-apiserver-cluster-unfixed            250m (12%!)(MISSING)  0 (0%!)(MISSING)  0 (0%!)(MISSING)       0 (0%!)(MISSING)       109s
  kube-system  kube-controller-manager-cluster-unfixed   200m (10%!)(MISSING)  0 (0%!)(MISSING)  0 (0%!)(MISSING)       0 (0%!)(MISSING)       108s
  kube-system  kube-proxy-jfgks                          0 (0%!)(MISSING)      0 (0%!)(MISSING)  0 (0%!)(MISSING)       0 (0%!)(MISSING)       96s
  kube-system  kube-scheduler-cluster-unfixed            100m (5%!)(MISSING)   0 (0%!)(MISSING)  0 (0%!)(MISSING)       0 (0%!)(MISSING)       108s
  kube-system  registry-4bhrq                            0 (0%!)(MISSING)      0 (0%!)(MISSING)  0 (0%!)(MISSING)       0 (0%!)(MISSING)       95s
  kube-system  registry-proxy-qxjcn                      0 (0%!)(MISSING)      0 (0%!)(MISSING)  0 (0%!)(MISSING)       0 (0%!)(MISSING)       95s
  kube-system  storage-provisioner                       0 (0%!)(MISSING)      0 (0%!)(MISSING)  0 (0%!)(MISSING)       0 (0%!)(MISSING)       106s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests              Limits
  --------           --------              ------
  cpu                750m (37%!)(MISSING)  0 (0%!)(MISSING)
  memory             170Mi (2%!)(MISSING)  170Mi (2%!)(MISSING)
  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
Events:
  Type    Reason                   Age                  From             Message
  ----    ------                   ----                 ----             -------
  Normal  Starting                 93s                  kube-proxy
  Normal  NodeHasSufficientMemory  2m8s (x5 over 2m8s)  kubelet          Node cluster-unfixed status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    2m8s (x4 over 2m8s)  kubelet          Node cluster-unfixed status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     2m8s (x4 over 2m8s)  kubelet          Node cluster-unfixed status is now: NodeHasSufficientPID
  Normal  Starting                 108s                 kubelet          Starting kubelet.
  Normal  NodeHasSufficientMemory  108s                 kubelet          Node cluster-unfixed status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    108s                 kubelet          Node cluster-unfixed status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     108s                 kubelet          Node cluster-unfixed status is now: NodeHasSufficientPID
  Normal  NodeNotReady             108s                 kubelet          Node cluster-unfixed status is now: NodeNotReady
  Normal  NodeAllocatableEnforced  108s                 kubelet          Updated Node Allocatable limit across pods
  Normal  NodeReady                98s                  kubelet          Node cluster-unfixed status is now: NodeReady
  Normal  RegisteredNode           96s                  node-controller  Node cluster-unfixed event: Registered Node cluster-unfixed in Controller
* * ==> dmesg <== *
[Jan 5 19:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.184169] core: CPUID marked event: 'cpu cycles' unavailable
[ +0.000000] core: CPUID marked event: 'instructions' unavailable
[ +0.000001] core: CPUID marked event: 'bus cycles' unavailable
[ +0.000000] core: CPUID marked event: 'cache references' unavailable
[ +0.000001] core: CPUID marked event: 'cache misses' unavailable
[ +0.000000] core: CPUID marked event: 'branch instructions' unavailable
[ +0.000000] core: CPUID marked event: 'branch misses' unavailable
[ +0.008207] pmd_set_huge: Cannot satisfy [mem 0xf0000000-0xf0200000] with a huge-page mapping due to MTRR override.
[ +5.466990] sd 0:0:0:0: [sda] Assuming drive cache: write through
[ +0.084377] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +3.269649] systemd-fstab-generator[206]: Ignoring "noauto" for root device
[ +0.111204] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +1.802562] Unknown ioctl 1976
[ +0.000224] Unknown ioctl 1976
[ +0.058223] Unknown ioctl 1976
[ +0.000552] Unknown ioctl 1976
[ +1.295213] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking!
(-2) [ +10.511893] systemd-fstab-generator[708]: Ignoring "noauto" for root device [ +0.120879] systemd-fstab-generator[719]: Ignoring "noauto" for root device [Jan 5 19:08] systemd-fstab-generator[887]: Ignoring "noauto" for root device [ +1.588760] kauditd_printk_skb: 14 callbacks suppressed [ +0.433579] systemd-fstab-generator[1047]: Ignoring "noauto" for root device [ +0.152287] systemd-fstab-generator[1058]: Ignoring "noauto" for root device [ +0.137192] systemd-fstab-generator[1069]: Ignoring "noauto" for root device [ +1.542100] systemd-fstab-generator[1219]: Ignoring "noauto" for root device [ +0.137508] systemd-fstab-generator[1230]: Ignoring "noauto" for root device [ +7.181065] systemd-fstab-generator[1483]: Ignoring "noauto" for root device [ +0.593285] kauditd_printk_skb: 68 callbacks suppressed [ +19.198335] systemd-fstab-generator[2156]: Ignoring "noauto" for root device [ +14.035163] kauditd_printk_skb: 8 callbacks suppressed [Jan 5 19:09] kauditd_printk_skb: 22 callbacks suppressed * * ==> etcd [2e696957c6f2] <== * {"level":"info","ts":"2023-01-05T19:08:38.816Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.252.129:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.252.129:2380","--initial-cluster=cluster-unfixed=https://192.168.252.129:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.252.129:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.252.129:2380","--name=cluster-unfixed","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]} {"level":"info","ts":"2023-01-05T19:08:38.817Z","caller":"embed/etcd.go:131","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.252.129:2380"]} {"level":"info","ts":"2023-01-05T19:08:38.818Z","caller":"embed/etcd.go:479","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]} {"level":"info","ts":"2023-01-05T19:08:38.822Z","caller":"embed/etcd.go:139","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.252.129:2379"]} {"level":"info","ts":"2023-01-05T19:08:38.823Z","caller":"embed/etcd.go:308","msg":"starting an etcd 
server","etcd-version":"3.5.4","git-sha":"08407ff76","go-version":"go1.16.15","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":false,"name":"cluster-unfixed","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.252.129:2380"],"listen-peer-urls":["https://192.168.252.129:2380"],"advertise-client-urls":["https://192.168.252.129:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.252.129:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"cluster-unfixed=https://192.168.252.129:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-size-bytes":2147483648,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"} {"level":"info","ts":"2023-01-05T19:08:38.830Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"4.005358ms"} {"level":"info","ts":"2023-01-05T19:08:38.882Z","caller":"etcdserver/raft.go:448","msg":"starting local member","local-member-id":"287e30fa48fdfa4f","cluster-id":"5fe8061e19c3269a"} {"level":"info","ts":"2023-01-05T19:08:38.882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"287e30fa48fdfa4f switched to configuration voters=()"} {"level":"info","ts":"2023-01-05T19:08:38.882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"287e30fa48fdfa4f became follower at term 0"} {"level":"info","ts":"2023-01-05T19:08:38.882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 287e30fa48fdfa4f [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"} {"level":"info","ts":"2023-01-05T19:08:38.882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"287e30fa48fdfa4f became follower at term 1"} {"level":"info","ts":"2023-01-05T19:08:38.882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"287e30fa48fdfa4f switched to configuration voters=(2917823460107221583)"} {"level":"warn","ts":"2023-01-05T19:08:38.886Z","caller":"auth/store.go:1220","msg":"simple token is not cryptographically signed"} {"level":"info","ts":"2023-01-05T19:08:38.890Z","caller":"mvcc/kvstore.go:415","msg":"kvstore restored","current-rev":1} {"level":"info","ts":"2023-01-05T19:08:38.893Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"} {"level":"info","ts":"2023-01-05T19:08:38.897Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"287e30fa48fdfa4f","local-server-version":"3.5.4","cluster-version":"to_be_decided"} {"level":"info","ts":"2023-01-05T19:08:38.906Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"287e30fa48fdfa4f","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"} {"level":"info","ts":"2023-01-05T19:08:38.909Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"287e30fa48fdfa4f switched to 
configuration voters=(2917823460107221583)"} {"level":"info","ts":"2023-01-05T19:08:38.909Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"5fe8061e19c3269a","local-member-id":"287e30fa48fdfa4f","added-peer-id":"287e30fa48fdfa4f","added-peer-peer-urls":["https://192.168.252.129:2380"]} {"level":"info","ts":"2023-01-05T19:08:38.912Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]} {"level":"info","ts":"2023-01-05T19:08:38.913Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"287e30fa48fdfa4f","initial-advertise-peer-urls":["https://192.168.252.129:2380"],"listen-peer-urls":["https://192.168.252.129:2380"],"advertise-client-urls":["https://192.168.252.129:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.252.129:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]} {"level":"info","ts":"2023-01-05T19:08:38.913Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"} {"level":"info","ts":"2023-01-05T19:08:38.913Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.252.129:2380"} {"level":"info","ts":"2023-01-05T19:08:38.913Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.252.129:2380"} {"level":"info","ts":"2023-01-05T19:08:39.083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"287e30fa48fdfa4f is starting a new election at term 1"} {"level":"info","ts":"2023-01-05T19:08:39.083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"287e30fa48fdfa4f became pre-candidate at term 1"} {"level":"info","ts":"2023-01-05T19:08:39.083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"287e30fa48fdfa4f received MsgPreVoteResp from 287e30fa48fdfa4f at term 1"} {"level":"info","ts":"2023-01-05T19:08:39.083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"287e30fa48fdfa4f became candidate at term 2"} {"level":"info","ts":"2023-01-05T19:08:39.083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"287e30fa48fdfa4f received MsgVoteResp from 287e30fa48fdfa4f at term 2"} {"level":"info","ts":"2023-01-05T19:08:39.084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"287e30fa48fdfa4f became leader at term 2"} {"level":"info","ts":"2023-01-05T19:08:39.084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 287e30fa48fdfa4f elected leader 287e30fa48fdfa4f at term 2"} {"level":"info","ts":"2023-01-05T19:08:39.085Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"287e30fa48fdfa4f","local-member-attributes":"{Name:cluster-unfixed ClientURLs:[https://192.168.252.129:2379]}","request-path":"/0/members/287e30fa48fdfa4f/attributes","cluster-id":"5fe8061e19c3269a","publish-timeout":"7s"} {"level":"info","ts":"2023-01-05T19:08:39.086Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"} {"level":"info","ts":"2023-01-05T19:08:39.096Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.252.129:2379"} {"level":"info","ts":"2023-01-05T19:08:39.096Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"} 
{"level":"info","ts":"2023-01-05T19:08:39.096Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"} {"level":"info","ts":"2023-01-05T19:08:39.099Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"} {"level":"info","ts":"2023-01-05T19:08:39.103Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"5fe8061e19c3269a","local-member-id":"287e30fa48fdfa4f","cluster-version":"3.5"} {"level":"info","ts":"2023-01-05T19:08:39.111Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"} {"level":"info","ts":"2023-01-05T19:08:39.111Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"} {"level":"info","ts":"2023-01-05T19:08:39.104Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"} {"level":"info","ts":"2023-01-05T19:08:39.111Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"} * * ==> kernel <== * 19:10:35 up 3 min, 0 users, load average: 0.74, 0.48, 0.19 Linux cluster-unfixed 5.10.57 #1 SMP Fri Oct 28 21:02:11 UTC 2022 x86_64 GNU/Linux PRETTY_NAME="Buildroot 2021.02.12" * * ==> kube-apiserver [4f44dfa79334] <== * W0105 19:08:41.159733 1 genericapiserver.go:656] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources. W0105 19:08:41.161538 1 genericapiserver.go:656] Skipping API events.k8s.io/v1beta1 because it has no resources. I0105 19:08:41.162625 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. I0105 19:08:41.162894 1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. W0105 19:08:41.186932 1 genericapiserver.go:656] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources. 
I0105 19:08:42.877754 1 secure_serving.go:210] Serving securely on [::]:8443 I0105 19:08:42.878231 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt" I0105 19:08:42.892564 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key" I0105 19:08:42.898281 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" I0105 19:08:42.904910 1 apf_controller.go:300] Starting API Priority and Fairness config controller I0105 19:08:42.906775 1 apiservice_controller.go:97] Starting APIServiceRegistrationController I0105 19:08:42.906902 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller I0105 19:08:42.906960 1 available_controller.go:491] Starting AvailableConditionController I0105 19:08:42.907109 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller I0105 19:08:42.907302 1 controller.go:80] Starting OpenAPI V3 AggregationController I0105 19:08:42.907640 1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key" I0105 19:08:42.910672 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller I0105 19:08:42.910873 1 shared_informer.go:255] Waiting for caches to sync for cluster_authentication_trust_controller I0105 19:08:42.911870 1 controller.go:83] Starting OpenAPI AggregationController I0105 19:08:42.912412 1 customresource_discovery_controller.go:209] Starting DiscoveryController I0105 19:08:42.913465 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt" I0105 19:08:42.914522 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt" I0105 19:08:42.915154 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt" I0105 19:08:42.915391 1 autoregister_controller.go:141] Starting autoregister controller I0105 19:08:42.915606 1 cache.go:32] Waiting for caches to sync for autoregister controller I0105 19:08:42.922585 1 controller.go:85] Starting OpenAPI controller I0105 19:08:42.922647 1 controller.go:85] Starting OpenAPI V3 controller I0105 19:08:42.922668 1 naming_controller.go:291] Starting NamingConditionController I0105 19:08:42.922854 1 establishing_controller.go:76] Starting EstablishingController I0105 19:08:42.923080 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController I0105 19:08:42.923244 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController I0105 19:08:42.923452 1 crd_finalizer.go:266] Starting CRDFinalizer I0105 19:08:42.971626 1 crdregistration_controller.go:111] Starting crd-autoregister controller I0105 19:08:42.971741 1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister I0105 19:08:43.103052 1 controller.go:616] quota admission added evaluator for: namespaces I0105 19:08:43.105511 1 apf_controller.go:305] Running API Priority and Fairness config worker I0105 19:08:43.107585 1 cache.go:39] Caches are synced for AvailableConditionController controller I0105 19:08:43.110678 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I0105 19:08:43.112883 1 shared_informer.go:262] Caches are 
synced for cluster_authentication_trust_controller I0105 19:08:43.115677 1 cache.go:39] Caches are synced for autoregister controller I0105 19:08:43.132811 1 shared_informer.go:262] Caches are synced for node_authorizer I0105 19:08:43.175284 1 shared_informer.go:262] Caches are synced for crd-autoregister I0105 19:08:43.584886 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). I0105 19:08:43.915206 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000 I0105 19:08:43.920904 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000 I0105 19:08:43.921097 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist. I0105 19:08:44.472566 1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io I0105 19:08:44.527128 1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io I0105 19:08:44.625150 1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1] W0105 19:08:44.633501 1 lease.go:250] Resetting endpoints for master service "kubernetes" to [192.168.252.129] I0105 19:08:44.635920 1 controller.go:616] quota admission added evaluator for: endpoints I0105 19:08:44.641831 1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io I0105 19:08:45.150856 1 controller.go:616] quota admission added evaluator for: serviceaccounts I0105 19:08:46.048909 1 controller.go:616] quota admission added evaluator for: deployments.apps I0105 19:08:46.060328 1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10] I0105 19:08:46.070690 1 controller.go:616] quota admission added evaluator for: daemonsets.apps I0105 19:08:46.369617 1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io I0105 19:08:58.847512 1 controller.go:616] quota admission added evaluator for: controllerrevisions.apps I0105 19:08:59.150944 1 controller.go:616] quota admission added evaluator for: replicasets.apps I0105 19:08:59.614067 1 alloc.go:327] "allocated clusterIPs" service="kube-system/registry" clusterIPs=map[IPv4:10.104.197.28] * * ==> kube-controller-manager [d4c8aafd72e9] <== * I0105 19:08:58.093740 1 shared_informer.go:255] Waiting for caches to sync for token_cleaner I0105 19:08:58.093750 1 shared_informer.go:262] Caches are synced for token_cleaner E0105 19:08:58.243228 1 core.go:92] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail W0105 19:08:58.243265 1 controllermanager.go:581] Skipping "service" I0105 19:08:58.266740 1 shared_informer.go:255] Waiting for caches to sync for resource quota W0105 19:08:58.271495 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="cluster-unfixed" does not exist I0105 19:08:58.283121 1 shared_informer.go:262] Caches are synced for namespace I0105 19:08:58.285533 1 shared_informer.go:262] Caches are synced for TTL I0105 19:08:58.294088 1 shared_informer.go:262] Caches are synced for ReplicationController I0105 19:08:58.296205 1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring I0105 19:08:58.296615 1 shared_informer.go:262] Caches are synced for PVC protection I0105 
19:08:58.308567 1 shared_informer.go:262] Caches are synced for daemon sets I0105 19:08:58.311337 1 shared_informer.go:262] Caches are synced for disruption I0105 19:08:58.311867 1 shared_informer.go:262] Caches are synced for certificate-csrapproving I0105 19:08:58.320379 1 shared_informer.go:255] Waiting for caches to sync for garbage collector I0105 19:08:58.329028 1 shared_informer.go:262] Caches are synced for stateful set I0105 19:08:58.342476 1 shared_informer.go:262] Caches are synced for cronjob I0105 19:08:58.342881 1 shared_informer.go:262] Caches are synced for service account I0105 19:08:58.343318 1 shared_informer.go:262] Caches are synced for HPA I0105 19:08:58.343970 1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator I0105 19:08:58.344313 1 shared_informer.go:262] Caches are synced for ephemeral I0105 19:08:58.346204 1 shared_informer.go:262] Caches are synced for ReplicaSet I0105 19:08:58.346257 1 shared_informer.go:262] Caches are synced for endpoint I0105 19:08:58.353608 1 shared_informer.go:262] Caches are synced for GC I0105 19:08:58.354892 1 shared_informer.go:262] Caches are synced for TTL after finished I0105 19:08:58.362076 1 shared_informer.go:262] Caches are synced for endpoint_slice I0105 19:08:58.363541 1 shared_informer.go:262] Caches are synced for node I0105 19:08:58.363845 1 range_allocator.go:166] Starting range CIDR allocator I0105 19:08:58.364040 1 shared_informer.go:255] Waiting for caches to sync for cidrallocator I0105 19:08:58.364320 1 shared_informer.go:262] Caches are synced for cidrallocator I0105 19:08:58.368501 1 shared_informer.go:262] Caches are synced for taint I0105 19:08:58.368945 1 taint_manager.go:204] "Starting NoExecuteTaintManager" I0105 19:08:58.369399 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving I0105 19:08:58.369711 1 taint_manager.go:209] "Sending events to api server" I0105 19:08:58.370294 1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: I0105 19:08:58.370586 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client W0105 19:08:58.370368 1 node_lifecycle_controller.go:1058] Missing timestamp for Node cluster-unfixed. Assuming now as a timestamp. I0105 19:08:58.371032 1 node_lifecycle_controller.go:1259] Controller detected that zone is now in state Normal. 
I0105 19:08:58.371773 1 event.go:294] "Event occurred" object="cluster-unfixed" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node cluster-unfixed event: Registered Node cluster-unfixed in Controller" I0105 19:08:58.372236 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown I0105 19:08:58.372301 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client I0105 19:08:58.380050 1 shared_informer.go:262] Caches are synced for persistent volume I0105 19:08:58.390379 1 range_allocator.go:367] Set node cluster-unfixed PodCIDR to [10.244.0.0/24] I0105 19:08:58.393290 1 shared_informer.go:262] Caches are synced for job I0105 19:08:58.393594 1 shared_informer.go:262] Caches are synced for expand I0105 19:08:58.393808 1 shared_informer.go:262] Caches are synced for deployment I0105 19:08:58.393922 1 shared_informer.go:262] Caches are synced for attach detach I0105 19:08:58.421140 1 shared_informer.go:262] Caches are synced for PV protection I0105 19:08:58.431580 1 shared_informer.go:262] Caches are synced for crt configmap I0105 19:08:58.443585 1 shared_informer.go:262] Caches are synced for bootstrap_signer I0105 19:08:58.459813 1 shared_informer.go:262] Caches are synced for resource quota I0105 19:08:58.467030 1 shared_informer.go:262] Caches are synced for resource quota I0105 19:08:58.856392 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-jfgks" I0105 19:08:58.921298 1 shared_informer.go:262] Caches are synced for garbage collector I0105 19:08:58.943179 1 shared_informer.go:262] Caches are synced for garbage collector I0105 19:08:58.943199 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. 
Proceeding to collect garbage I0105 19:08:59.154123 1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-565d847f94 to 1" I0105 19:08:59.359243 1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-zdpqj" I0105 19:08:59.620941 1 event.go:294] "Event occurred" object="kube-system/registry" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: registry-4bhrq" I0105 19:08:59.675321 1 event.go:294] "Event occurred" object="kube-system/registry-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: registry-proxy-qxjcn" * * ==> kube-proxy [1ffb8497230b] <== * I0105 19:09:00.910747 1 node.go:163] Successfully retrieved node IP: 192.168.252.129 I0105 19:09:00.910851 1 server_others.go:138] "Detected node IP" address="192.168.252.129" I0105 19:09:00.910876 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode="" I0105 19:09:00.982844 1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6 I0105 19:09:00.982879 1 server_others.go:206] "Using iptables Proxier" I0105 19:09:00.982911 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259" I0105 19:09:00.984061 1 server.go:661] "Version info" version="v1.25.3" I0105 19:09:00.984074 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" I0105 19:09:00.985214 1 config.go:317] "Starting service config controller" I0105 19:09:00.985224 1 shared_informer.go:255] Waiting for caches to sync for service config I0105 19:09:00.985243 1 config.go:226] "Starting endpoint slice config controller" I0105 19:09:00.985247 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config I0105 19:09:00.985749 1 config.go:444] "Starting node config controller" I0105 19:09:00.985756 1 shared_informer.go:255] Waiting for caches to sync for node config I0105 19:09:01.086084 1 shared_informer.go:262] Caches are synced for node config I0105 19:09:01.086108 1 shared_informer.go:262] Caches are synced for service config I0105 19:09:01.086129 1 shared_informer.go:262] Caches are synced for endpoint slice config * * ==> kube-scheduler [969a1b7175cc] <== * I0105 19:08:39.906681 1 serving.go:348] Generated self-signed cert in-memory W0105 19:08:42.980527 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' W0105 19:08:42.980945 1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" W0105 19:08:42.981251 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous. 
W0105 19:08:42.981280 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false I0105 19:08:43.061275 1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3" I0105 19:08:43.061311 1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" I0105 19:08:43.062788 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259 I0105 19:08:43.063311 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" I0105 19:08:43.063344 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0105 19:08:43.063542 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" W0105 19:08:43.074844 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0105 19:08:43.075280 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope W0105 19:08:43.079136 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0105 19:08:43.079929 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope W0105 19:08:43.079251 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope E0105 19:08:43.080130 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope W0105 19:08:43.079337 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0105 19:08:43.080147 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope W0105 19:08:43.079388 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0105 19:08:43.080346 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource 
"storageclasses" in API group "storage.k8s.io" at the cluster scope W0105 19:08:43.079431 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0105 19:08:43.080388 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope W0105 19:08:43.079488 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0105 19:08:43.082551 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope W0105 19:08:43.079524 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope W0105 19:08:43.079579 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope W0105 19:08:43.079628 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope W0105 19:08:43.079666 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope W0105 19:08:43.079698 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope W0105 19:08:43.079733 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0105 19:08:43.082883 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0105 19:08:43.082891 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope E0105 19:08:43.082896 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: 
failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope E0105 19:08:43.082901 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0105 19:08:43.083201 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0105 19:08:43.083210 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope W0105 19:08:43.086277 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0105 19:08:43.086890 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope W0105 19:08:43.086915 1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0105 19:08:43.087236 1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" W0105 19:08:43.916536 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0105 19:08:43.916562 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope W0105 19:08:43.929394 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0105 19:08:43.929432 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope W0105 
19:08:43.943084 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0105 19:08:43.943186 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope W0105 19:08:43.977872 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0105 19:08:43.978139 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope W0105 19:08:43.983279 1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0105 19:08:43.983364 1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" W0105 19:08:44.119827 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0105 19:08:44.119864 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope W0105 19:08:44.137362 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0105 19:08:44.137402 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope W0105 19:08:44.176170 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope E0105 19:08:44.176207 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope W0105 19:08:44.248036 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the 
cluster scope E0105 19:08:44.248071 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope I0105 19:08:47.264524 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * * ==> kubelet <== * -- Journal begins at Thu 2023-01-05 11:07:38 UTC, ends at Thu 2023-01-05 19:10:35 UTC. -- Jan 05 19:08:46 cluster-unfixed kubelet[2163]: I0105 19:08:46.582630 2163 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 05 19:08:46 cluster-unfixed kubelet[2163]: I0105 19:08:46.634038 2163 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Jan 05 19:08:46 cluster-unfixed kubelet[2163]: E0105 19:08:46.662666 2163 kubelet_network_linux.go:141] "Failed to ensure that KUBE-MARK-DROP chain exists" err=< Jan 05 19:08:46 cluster-unfixed kubelet[2163]: error creating chain "KUBE-MARK-DROP": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?) Jan 05 19:08:46 cluster-unfixed kubelet[2163]: Perhaps ip6tables or your kernel needs to be upgraded. Jan 05 19:08:46 cluster-unfixed kubelet[2163]: > Jan 05 19:08:46 cluster-unfixed kubelet[2163]: I0105 19:08:46.662687 2163 kubelet_network_linux.go:71] "Failed to initialize iptables rules; some functionality may be missing." protocol=IPv6 Jan 05 19:08:46 cluster-unfixed kubelet[2163]: I0105 19:08:46.662701 2163 status_manager.go:161] "Starting to sync pod status with apiserver" Jan 05 19:08:46 cluster-unfixed kubelet[2163]: I0105 19:08:46.662716 2163 kubelet.go:2010] "Starting kubelet main sync loop" Jan 05 19:08:46 cluster-unfixed kubelet[2163]: E0105 19:08:46.662754 2163 kubelet.go:2034] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 05 19:08:46 cluster-unfixed kubelet[2163]: I0105 19:08:46.763487 2163 topology_manager.go:205] "Topology Admit Handler" Jan 05 19:08:46 cluster-unfixed kubelet[2163]: I0105 19:08:46.764091 2163 topology_manager.go:205] "Topology Admit Handler" Jan 05 19:08:46 cluster-unfixed kubelet[2163]: I0105 19:08:46.764216 2163 topology_manager.go:205] "Topology Admit Handler" Jan 05 19:08:46 cluster-unfixed kubelet[2163]: I0105 19:08:46.764415 2163 topology_manager.go:205] "Topology Admit Handler" Jan 05 19:08:46 cluster-unfixed kubelet[2163]: E0105 19:08:46.771214 2163 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"etcd-cluster-unfixed\" already exists" pod="kube-system/etcd-cluster-unfixed" Jan 05 19:08:46 cluster-unfixed kubelet[2163]: E0105 19:08:46.774461 2163 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"kube-apiserver-cluster-unfixed\" already exists" pod="kube-system/kube-apiserver-cluster-unfixed" Jan 05 19:08:46 cluster-unfixed kubelet[2163]: I0105 19:08:46.852503 2163 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bcd75c2ff9e3b7e898b7f9f8390950cf-k8s-certs\") pod \"kube-controller-manager-cluster-unfixed\" (UID: \"bcd75c2ff9e3b7e898b7f9f8390950cf\") " pod="kube-system/kube-controller-manager-cluster-unfixed" Jan 05 19:08:46 cluster-unfixed kubelet[2163]: I0105 19:08:46.852621 2163 reconciler.go:357] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/68e463e37fe50ee3d1005ada50c7d4a6-etcd-certs\") pod \"etcd-cluster-unfixed\" (UID: \"68e463e37fe50ee3d1005ada50c7d4a6\") " pod="kube-system/etcd-cluster-unfixed" Jan 05 19:08:46 cluster-unfixed kubelet[2163]: I0105 19:08:46.852708 2163 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/68e463e37fe50ee3d1005ada50c7d4a6-etcd-data\") pod \"etcd-cluster-unfixed\" (UID: \"68e463e37fe50ee3d1005ada50c7d4a6\") " pod="kube-system/etcd-cluster-unfixed" Jan 05 19:08:46 cluster-unfixed kubelet[2163]: I0105 19:08:46.852819 2163 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/81166cf53e87a585650a7577019e7f59-k8s-certs\") pod \"kube-apiserver-cluster-unfixed\" (UID: \"81166cf53e87a585650a7577019e7f59\") " pod="kube-system/kube-apiserver-cluster-unfixed" Jan 05 19:08:46 cluster-unfixed kubelet[2163]: I0105 19:08:46.852983 2163 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/81166cf53e87a585650a7577019e7f59-usr-share-ca-certificates\") pod \"kube-apiserver-cluster-unfixed\" (UID: \"81166cf53e87a585650a7577019e7f59\") " pod="kube-system/kube-apiserver-cluster-unfixed" Jan 05 19:08:46 cluster-unfixed kubelet[2163]: I0105 19:08:46.853215 2163 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bcd75c2ff9e3b7e898b7f9f8390950cf-ca-certs\") pod \"kube-controller-manager-cluster-unfixed\" (UID: \"bcd75c2ff9e3b7e898b7f9f8390950cf\") " pod="kube-system/kube-controller-manager-cluster-unfixed" Jan 05 19:08:46 cluster-unfixed kubelet[2163]: I0105 19:08:46.853365 2163 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bcd75c2ff9e3b7e898b7f9f8390950cf-usr-share-ca-certificates\") pod \"kube-controller-manager-cluster-unfixed\" (UID: \"bcd75c2ff9e3b7e898b7f9f8390950cf\") " pod="kube-system/kube-controller-manager-cluster-unfixed" Jan 05 19:08:46 cluster-unfixed kubelet[2163]: I0105 19:08:46.853447 2163 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/336b0aa55d09629bb94af9a0701fc1af-kubeconfig\") pod \"kube-scheduler-cluster-unfixed\" (UID: \"336b0aa55d09629bb94af9a0701fc1af\") " pod="kube-system/kube-scheduler-cluster-unfixed" Jan 05 19:08:46 cluster-unfixed kubelet[2163]: I0105 19:08:46.853494 2163 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/81166cf53e87a585650a7577019e7f59-ca-certs\") pod \"kube-apiserver-cluster-unfixed\" (UID: \"81166cf53e87a585650a7577019e7f59\") " pod="kube-system/kube-apiserver-cluster-unfixed" Jan 05 19:08:46 cluster-unfixed kubelet[2163]: I0105 19:08:46.853516 2163 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bcd75c2ff9e3b7e898b7f9f8390950cf-flexvolume-dir\") pod \"kube-controller-manager-cluster-unfixed\" (UID: \"bcd75c2ff9e3b7e898b7f9f8390950cf\") " pod="kube-system/kube-controller-manager-cluster-unfixed" Jan 
05 19:08:46 cluster-unfixed kubelet[2163]: I0105 19:08:46.853535 2163 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bcd75c2ff9e3b7e898b7f9f8390950cf-kubeconfig\") pod \"kube-controller-manager-cluster-unfixed\" (UID: \"bcd75c2ff9e3b7e898b7f9f8390950cf\") " pod="kube-system/kube-controller-manager-cluster-unfixed" Jan 05 19:08:47 cluster-unfixed kubelet[2163]: I0105 19:08:47.174653 2163 apiserver.go:52] "Watching apiserver" Jan 05 19:08:47 cluster-unfixed kubelet[2163]: I0105 19:08:47.458569 2163 reconciler.go:169] "Reconciler: start to sync state" Jan 05 19:08:47 cluster-unfixed kubelet[2163]: E0105 19:08:47.980244 2163 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"kube-scheduler-cluster-unfixed\" already exists" pod="kube-system/kube-scheduler-cluster-unfixed" Jan 05 19:08:48 cluster-unfixed kubelet[2163]: E0105 19:08:48.180996 2163 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"kube-apiserver-cluster-unfixed\" already exists" pod="kube-system/kube-apiserver-cluster-unfixed" Jan 05 19:08:48 cluster-unfixed kubelet[2163]: E0105 19:08:48.383781 2163 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-cluster-unfixed\" already exists" pod="kube-system/kube-controller-manager-cluster-unfixed" Jan 05 19:08:48 cluster-unfixed kubelet[2163]: E0105 19:08:48.580876 2163 kubelet.go:1712] "Failed creating a mirror pod for" err="pods \"etcd-cluster-unfixed\" already exists" pod="kube-system/etcd-cluster-unfixed" Jan 05 19:08:58 cluster-unfixed kubelet[2163]: I0105 19:08:58.403368 2163 topology_manager.go:205] "Topology Admit Handler" Jan 05 19:08:58 cluster-unfixed kubelet[2163]: I0105 19:08:58.424051 2163 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24" Jan 05 19:08:58 cluster-unfixed kubelet[2163]: I0105 19:08:58.426005 2163 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24" Jan 05 19:08:58 cluster-unfixed kubelet[2163]: I0105 19:08:58.531124 2163 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcslv\" (UniqueName: \"kubernetes.io/projected/498fc817-30d2-438d-b2c8-181521eded6a-kube-api-access-zcslv\") pod \"storage-provisioner\" (UID: \"498fc817-30d2-438d-b2c8-181521eded6a\") " pod="kube-system/storage-provisioner" Jan 05 19:08:58 cluster-unfixed kubelet[2163]: I0105 19:08:58.531192 2163 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/498fc817-30d2-438d-b2c8-181521eded6a-tmp\") pod \"storage-provisioner\" (UID: \"498fc817-30d2-438d-b2c8-181521eded6a\") " pod="kube-system/storage-provisioner" Jan 05 19:08:58 cluster-unfixed kubelet[2163]: E0105 19:08:58.638923 2163 projected.go:290] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 05 19:08:58 cluster-unfixed kubelet[2163]: E0105 19:08:58.639178 2163 projected.go:196] Error preparing data for projected volume kube-api-access-zcslv for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found Jan 05 19:08:58 cluster-unfixed kubelet[2163]: E0105 19:08:58.639692 2163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/498fc817-30d2-438d-b2c8-181521eded6a-kube-api-access-zcslv podName:498fc817-30d2-438d-b2c8-181521eded6a nodeName:}" failed. 
No retries permitted until 2023-01-05 19:08:59.139390384 +0000 UTC m=+13.115470709 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zcslv" (UniqueName: "kubernetes.io/projected/498fc817-30d2-438d-b2c8-181521eded6a-kube-api-access-zcslv") pod "storage-provisioner" (UID: "498fc817-30d2-438d-b2c8-181521eded6a") : configmap "kube-root-ca.crt" not found Jan 05 19:08:58 cluster-unfixed kubelet[2163]: I0105 19:08:58.868430 2163 topology_manager.go:205] "Topology Admit Handler" Jan 05 19:08:58 cluster-unfixed kubelet[2163]: I0105 19:08:58.933273 2163 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/687b6e80-8a05-4377-84fc-3fd0509cfe1c-lib-modules\") pod \"kube-proxy-jfgks\" (UID: \"687b6e80-8a05-4377-84fc-3fd0509cfe1c\") " pod="kube-system/kube-proxy-jfgks" Jan 05 19:08:58 cluster-unfixed kubelet[2163]: I0105 19:08:58.933336 2163 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/687b6e80-8a05-4377-84fc-3fd0509cfe1c-kube-proxy\") pod \"kube-proxy-jfgks\" (UID: \"687b6e80-8a05-4377-84fc-3fd0509cfe1c\") " pod="kube-system/kube-proxy-jfgks" Jan 05 19:08:58 cluster-unfixed kubelet[2163]: I0105 19:08:58.933403 2163 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nn6m\" (UniqueName: \"kubernetes.io/projected/687b6e80-8a05-4377-84fc-3fd0509cfe1c-kube-api-access-4nn6m\") pod \"kube-proxy-jfgks\" (UID: \"687b6e80-8a05-4377-84fc-3fd0509cfe1c\") " pod="kube-system/kube-proxy-jfgks" Jan 05 19:08:58 cluster-unfixed kubelet[2163]: I0105 19:08:58.933442 2163 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/687b6e80-8a05-4377-84fc-3fd0509cfe1c-xtables-lock\") pod \"kube-proxy-jfgks\" (UID: \"687b6e80-8a05-4377-84fc-3fd0509cfe1c\") " pod="kube-system/kube-proxy-jfgks" Jan 05 19:08:59 cluster-unfixed kubelet[2163]: E0105 19:08:59.043316 2163 projected.go:290] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 05 19:08:59 cluster-unfixed kubelet[2163]: E0105 19:08:59.043342 2163 projected.go:196] Error preparing data for projected volume kube-api-access-4nn6m for pod kube-system/kube-proxy-jfgks: configmap "kube-root-ca.crt" not found Jan 05 19:08:59 cluster-unfixed kubelet[2163]: E0105 19:08:59.043386 2163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/687b6e80-8a05-4377-84fc-3fd0509cfe1c-kube-api-access-4nn6m podName:687b6e80-8a05-4377-84fc-3fd0509cfe1c nodeName:}" failed. No retries permitted until 2023-01-05 19:08:59.543371331 +0000 UTC m=+13.519451652 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4nn6m" (UniqueName: "kubernetes.io/projected/687b6e80-8a05-4377-84fc-3fd0509cfe1c-kube-api-access-4nn6m") pod "kube-proxy-jfgks" (UID: "687b6e80-8a05-4377-84fc-3fd0509cfe1c") : configmap "kube-root-ca.crt" not found Jan 05 19:08:59 cluster-unfixed kubelet[2163]: I0105 19:08:59.371590 2163 topology_manager.go:205] "Topology Admit Handler" Jan 05 19:08:59 cluster-unfixed kubelet[2163]: I0105 19:08:59.436105 2163 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wspzf\" (UniqueName: \"kubernetes.io/projected/b39446cf-2099-4f7f-8f69-c7366f939879-kube-api-access-wspzf\") pod \"coredns-565d847f94-zdpqj\" (UID: \"b39446cf-2099-4f7f-8f69-c7366f939879\") " pod="kube-system/coredns-565d847f94-zdpqj" Jan 05 19:08:59 cluster-unfixed kubelet[2163]: I0105 19:08:59.436173 2163 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b39446cf-2099-4f7f-8f69-c7366f939879-config-volume\") pod \"coredns-565d847f94-zdpqj\" (UID: \"b39446cf-2099-4f7f-8f69-c7366f939879\") " pod="kube-system/coredns-565d847f94-zdpqj" Jan 05 19:08:59 cluster-unfixed kubelet[2163]: I0105 19:08:59.629878 2163 topology_manager.go:205] "Topology Admit Handler" Jan 05 19:08:59 cluster-unfixed kubelet[2163]: I0105 19:08:59.685613 2163 topology_manager.go:205] "Topology Admit Handler" Jan 05 19:08:59 cluster-unfixed kubelet[2163]: I0105 19:08:59.738977 2163 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rxcb\" (UniqueName: \"kubernetes.io/projected/d3655faf-41c7-49a6-a2ce-c9e003afd9dc-kube-api-access-5rxcb\") pod \"registry-4bhrq\" (UID: \"d3655faf-41c7-49a6-a2ce-c9e003afd9dc\") " pod="kube-system/registry-4bhrq" Jan 05 19:08:59 cluster-unfixed kubelet[2163]: I0105 19:08:59.739243 2163 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qx2j\" (UniqueName: \"kubernetes.io/projected/4709c921-910c-460f-8759-1bda386507a5-kube-api-access-9qx2j\") pod \"registry-proxy-qxjcn\" (UID: \"4709c921-910c-460f-8759-1bda386507a5\") " pod="kube-system/registry-proxy-qxjcn" Jan 05 19:09:00 cluster-unfixed kubelet[2163]: I0105 19:09:00.240558 2163 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="31aecfd858f71a78606eea89b53ee7ae246222e5c8a6b998444aaff6ffefaac2" Jan 05 19:09:01 cluster-unfixed kubelet[2163]: I0105 19:09:01.646205 2163 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="f63e11a070a2f7a029802a9f25c3b0b44c1f6562ce10ac136c0b3ee42fa11c7d" Jan 05 19:09:01 cluster-unfixed kubelet[2163]: I0105 19:09:01.754385 2163 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="4e3aa9906752e8a3a8b6984cb05e0cd994ee7058a36d59204bf1fb4847c513c7" Jan 05 19:09:31 cluster-unfixed kubelet[2163]: I0105 19:09:31.156981 2163 scope.go:115] "RemoveContainer" containerID="cb3015f51c5289739ecab126439e3f1bea663734ea8b9ff380d0080146c80eef" * * ==> storage-provisioner [2f8e753e9024] <== * I0105 19:09:31.330877 1 storage_provisioner.go:116] Initializing the minikube storage provisioner... I0105 19:09:31.351654 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service! I0105 19:09:31.351845 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath... 
I0105 19:09:31.373256 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0105 19:09:31.373948 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8ba45711-cf49-4e75-8d97-26c3ca50b377", APIVersion:"v1", ResourceVersion:"423", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' cluster-unfixed_3af6ad1b-d1e0-4c38-a353-fe3c721c3b5a became leader
I0105 19:09:31.374519 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_cluster-unfixed_3af6ad1b-d1e0-4c38-a353-fe3c721c3b5a!
I0105 19:09:31.476255 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_cluster-unfixed_3af6ad1b-d1e0-4c38-a353-fe3c721c3b5a!
*
* ==> storage-provisioner [cb3015f51c52] <==
*
I0105 19:09:00.741180 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0105 19:09:30.743837 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout