
How to Deploy Kubernetes? Minikube on RHEL/CentOS

How do you create a Kubernetes sandbox environment? How can you experience Kubernetes on a laptop or desktop? Minikube was developed for the desktop/laptop environment to let you experience a Kubernetes cluster. “Minikube” runs a single-node Kubernetes cluster inside a virtual machine on the laptop/desktop with the help of a virtualization technology (VirtualBox, KVM, VMware Fusion). This article will walk through the deployment of Minikube on RHEL 7 / CentOS 7 using KVM virtualization.

 

Note: Virtualization (VT) is required only to create the VM for Minikube; it is not mandatory for an actual Kubernetes deployment.
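Before installing anything, you can quickly confirm that the CPU exposes the VT extensions (vmx for Intel, svm for AMD). A non-zero count from the command below is usually enough to show that hardware virtualization is available; step 3 performs a more thorough check with virt-host-validate.

[root@kubebase ~]# egrep -c '(vmx|svm)' /proc/cpuinfo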

 

Environment:

 

Installing & Configuring KVM (Virtualization Technology)

1. Log in to RHEL 7/CentOS 7 and install the KVM packages.

[root@kubebase ~]# yum -y install qemu-kvm libvirt libvirt-daemon-kvm
Loaded plugins: fastestmirror
Determining fastest mirrors
 * base: mirrors.viethosting.com
 * extras: mirrors.viethosting.com
 * updates: mirrors.viethosting.com
base                                                                                                                                             | 3.6 kB  00:00:00
extras                                                                                                                                           | 3.4 kB  00:00:02
updates                                                                                                                                          | 3.4 kB  00:00:00
(1/4): base/7/x86_64/primary_db                             | 6.0 MB  00:00:03
(2/4): base/7/x86_64/group_gz                               | 166 kB  00:00:16
(3/4): extras/7/x86_64/primary_db                           | 201 kB  00:00:17
(4/4): updates/7/x86_64/primary_db                          | 5.0 MB  00:01:33
Resolving Dependencies

 

2. Start the libvirtd service and enable it so that it persists across reboots.

[root@kubebase ~]#  systemctl start libvirtd
[root@kubebase ~]# systemctl enable libvirtd
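As a quick sanity check, confirm that the service is active and will start at boot (assuming the commands above completed without error):

[root@kubebase ~]# systemctl is-active libvirtd
active
[root@kubebase ~]# systemctl is-enabled libvirtd
enabled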

 

3. Ensure that the laptop/desktop supports VT technology.

[root@kubebase ~]# virt-host-validate
  QEMU: Checking for hardware virtualization                                 : PASS
  QEMU: Checking if device /dev/kvm exists                                   : PASS
  QEMU: Checking if device /dev/kvm is accessible                            : PASS
  QEMU: Checking if device /dev/vhost-net exists                             : PASS
  QEMU: Checking if device /dev/net/tun exists                               : PASS
  QEMU: Checking for cgroup 'memory' controller support                      : PASS
  QEMU: Checking for cgroup 'memory' controller mount-point                  : PASS
  QEMU: Checking for cgroup 'cpu' controller support                         : PASS
  QEMU: Checking for cgroup 'cpu' controller mount-point                     : PASS
  QEMU: Checking for cgroup 'cpuacct' controller support                     : PASS
  QEMU: Checking for cgroup 'cpuacct' controller mount-point                 : PASS
  QEMU: Checking for cgroup 'cpuset' controller support                      : PASS
  QEMU: Checking for cgroup 'cpuset' controller mount-point                  : PASS
  QEMU: Checking for cgroup 'devices' controller support                     : PASS
  QEMU: Checking for cgroup 'devices' controller mount-point                 : PASS
  QEMU: Checking for cgroup 'blkio' controller support                       : PASS
  QEMU: Checking for cgroup 'blkio' controller mount-point                   : PASS
  QEMU: Checking for device assignment IOMMU support                         : WARN (No ACPI DMAR table found, IOMMU either disabled in BIOS or not supported by this hardware platform)
   LXC: Checking for Linux >= 2.6.26                                         : PASS
   LXC: Checking for namespace ipc                                           : PASS
   LXC: Checking for namespace mnt                                           : PASS
   LXC: Checking for namespace pid                                           : PASS
   LXC: Checking for namespace uts                                           : PASS
   LXC: Checking for namespace net                                           : PASS
   LXC: Checking for namespace user                                          : PASS
   LXC: Checking for cgroup 'memory' controller support                      : PASS
   LXC: Checking for cgroup 'memory' controller mount-point                  : PASS
   LXC: Checking for cgroup 'cpu' controller support                         : PASS
   LXC: Checking for cgroup 'cpu' controller mount-point                     : PASS
   LXC: Checking for cgroup 'cpuacct' controller support                     : PASS
   LXC: Checking for cgroup 'cpuacct' controller mount-point                 : PASS
   LXC: Checking for cgroup 'cpuset' controller support                      : PASS
   LXC: Checking for cgroup 'cpuset' controller mount-point                  : PASS
   LXC: Checking for cgroup 'devices' controller support                     : PASS
   LXC: Checking for cgroup 'devices' controller mount-point                 : PASS
   LXC: Checking for cgroup 'blkio' controller support                       : PASS
   LXC: Checking for cgroup 'blkio' controller mount-point                   : PASS

 

Also ensure that the “firewalld” service is up and running.

[root@kubebase ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2019-05-21 14:36:01 EDT; 4min 10s ago
     Docs: man:firewalld(1)
 Main PID: 740 (firewalld)
   CGroup: /system.slice/firewalld.service
           └─740 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid

May 21 14:35:58 kubebase systemd[1]: Starting firewalld - dynamic firewall daemon...
May 21 14:36:01 kubebase systemd[1]: Started firewalld - dynamic firewall daemon.
[root@kubebase ~]#

You might get the following error if you don’t enable firewalld.

12575 start.go:529] StartHost: create: Error creating machine: Error in driver during machine creation: creating network: creating network minikube-net: virError(Code=89, Domain=47, Message='The name org.fedoraproject.FirewallD1 was not provided by any .service files')

X Unable to start VM: create: Error creating machine: Error in driver during machine creation: creating network: creating network minikube-net: virError(Code=89, Domain=47, Message='The name org.fedoraproject.FirewallD1 was not provided by any .service files')

* Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
  - https://github.com/kubernetes/minikube/issues/new
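If firewalld is stopped or disabled, starting and enabling it (the same pattern used for libvirtd above) should clear this error:

[root@kubebase ~]# systemctl start firewalld
[root@kubebase ~]# systemctl enable firewalld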

 

Configure Kubernetes Repo:

4. Configure the Kubernetes yum repository so that the Kubernetes components can be installed.

[root@kubebase yum.repos.d]# cd /etc/yum.repos.d
[root@kubebase yum.repos.d]# cat Kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
[root@kubebase yum.repos.d]# pwd
/etc/yum.repos.d
[root@kubebase yum.repos.d]#
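If the repo file does not exist yet, it can be created in one step with a heredoc; the contents below simply mirror the repo definition shown above (the quoted 'EOF' keeps $basearch from being expanded by the shell, so yum resolves it instead):

[root@kubebase yum.repos.d]# cat <<'EOF' > /etc/yum.repos.d/Kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF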

 

5. Install the “kubectl” binary.

[root@kubebase yum.repos.d]# yum -y install kubectl
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.viethosting.com
 * extras: mirrors.viethosting.com
 * updates: mirrors.viethosting.com
kubernetes/x86_64/signature                                                                                                                      |  454 B  00:00:00
Retrieving key from https://packages.cloud.google.com/yum/doc/yum-key.gpg
Importing GPG key 0xA7317B0F:
 Userid     : "Google Cloud Packages Automatic Signing Key <gc-team@google.com>"
 Fingerprint: d0bc 747f d8ca f711 7500 d6fa 3746 c208 a731 7b0f
 From       : https://packages.cloud.google.com/yum/doc/yum-key.gpg
Retrieving key from https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
kubernetes/x86_64/signature                                                                                                                      | 1.4 kB  00:00:00 !!!
kubernetes/x86_64/primary                                                                                                                        |  49 kB  00:00:02
kubernetes                                                                                                                                                      351/351
Resolving Dependencies
--> Running transaction check

<<<<<< Output Truncated >>>>>

Running transaction
  Installing : kubectl-1.14.2-0.x86_64                                                                                                                              1/1
  Verifying  : kubectl-1.14.2-0.x86_64                                                                                                                              1/1

Installed:
  kubectl.x86_64 0:1.14.2-0
Complete!

 

6. Download the Minikube binary and the KVM2 driver binary from the Google storage repository.

[root@kubebase ~]# wget https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 -O minikube
--2019-05-21 15:10:12--  https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
Resolving storage.googleapis.com (storage.googleapis.com)... 173.194.73.128, 2a00:1450:4010:c05::80
Connecting to storage.googleapis.com (storage.googleapis.com)|173.194.73.128|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 41728440 (40M) [application/octet-stream]
Saving to: ‘minikube’

100%[=======================================================>] 41,728,440  1.11MB/s   in 37s

2019-05-21 15:10:54 (1.08 MB/s) - ‘minikube’ saved [41728440/41728440]

[root@kubebase ~]# wget https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2
--2019-05-21 15:11:14--  https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2
Resolving storage.googleapis.com (storage.googleapis.com)... 64.233.161.128, 2a00:1450:4010:c0e::80
Connecting to storage.googleapis.com (storage.googleapis.com)|64.233.161.128|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 37581096 (36M) [application/octet-stream]
Saving to: ‘docker-machine-driver-kvm2’

100%[========================================================>] 37,581,096  1.18MB/s   in 39s

2019-05-21 15:11:59 (952 KB/s) - ‘docker-machine-driver-kvm2’ saved [37581096/37581096]

[root@kubebase ~]#
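Optionally, verify the downloads before installing them. The command below prints the local SHA-256 checksums; compare them against the values published for the corresponding minikube release (the exact location of the published checksums may vary, so treat that as something to confirm on the releases page):

[root@kubebase ~]# sha256sum minikube docker-machine-driver-kvm2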

 

7. Make the binaries executable and move them to the command search path.

[root@kubebase ~]# chmod 755 minikube docker-machine-driver-kvm2
[root@kubebase ~]# mv minikube docker-machine-driver-kvm2 /usr/local/bin/
[root@kubebase ~]#

 

8. Check the “minikube” and “kubectl” versions.

[root@kubebase ~]# minikube version
minikube version: v1.1.0
[root@kubebase ~]# kubectl version -o json
{
  "clientVersion": {
    "major": "1",
    "minor": "14",
    "gitVersion": "v1.14.2",
    "gitCommit": "66049e3b21efe110454d67df4fa62b08ea79a19b",
    "gitTreeState": "clean",
    "buildDate": "2019-05-16T16:23:09Z",
    "goVersion": "go1.12.5",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}

 

Deploying Minikube Cluster:

9. Start Minikube using the KVM2 driver.

[root@kubebase ~]# minikube start --vm-driver kvm2
* minikube v1.1.0 on linux (amd64)
* Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
* Restarting existing kvm2 VM for "minikube" ...
* Waiting for SSH access ...
* Configuring environment for Kubernetes v1.14.2 on Docker 18.09.6
* Relaunching Kubernetes v1.14.2 using kubeadm ...
* Verifying: apiserver proxy etcd scheduler controller dns
* Done! kubectl is now configured to use "minikube"
[root@kubebase ~]# 
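The VM is created with default resources (2 CPUs, 2048 MB RAM, 20 GB disk, as visible in the VM-creation output further below). If you need more headroom, these can be overridden at start time; for example, to allocate 4 GB of memory:

[root@kubebase ~]# minikube start --vm-driver kvm2 --cpus 2 --memory 4096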

 

If you get the following error, just delete the “minikube” VM and re-create it.

Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
E0522 03:48:03.458345 1488 start.go:529] StartHost: Error getting state for host: getting connection: looking up domain: virError(Code=0, Domain=0, Message='Missing error')

X Unable to start VM: Error getting state for host: getting connection: looking up domain: virError(Code=0, Domain=0, Message='Missing error')

* Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
  - https://github.com/kubernetes/minikube/issues/new

 

Delete the “minikube” VM using the following command.

[root@kubebase ~]# minikube delete
* Deleting "minikube" from kvm2 ...
* The "minikube" cluster has been deleted.
[root@kubebase ~]# minikube start --vm-driver kvm2
* minikube v1.1.0 on linux (amd64)
* Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
* Configuring environment for Kubernetes v1.14.2 on Docker 18.09.6

* Downloading kubeadm v1.14.2
* Downloading kubelet v1.14.2

X Failed to get driver URL: connection is shut down

* Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
  - https://github.com/kubernetes/minikube/issues/new
[root@kubebase ~]#

 

Validating Minikube Health Status:

10. Check the Kubernetes cluster status and ensure that all the components are running.

[root@kubebase ~]# minikube status
host: Running
kubelet: Running
apiserver: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.39.250
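The VM IP reported above can also be printed on its own, which is handy for scripting:

[root@kubebase ~]# minikube ip
192.168.39.250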

 

11. Check the Minikube service list. Note that the dashboard service is not listed yet.

[root@kubebase ~]# minikube service list
|-------------|------------|--------------|
|  NAMESPACE  |    NAME    |     URL      |
|-------------|------------|--------------|
| default     | kubernetes | No node port |
| kube-system | kube-dns   | No node port |
|-------------|------------|--------------|

 

12. Check the “Minikube” docker environment.

[root@kubebase ~]# minikube docker-env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.39.250:2376"
export DOCKER_CERT_PATH="/root/.minikube/certs"
export DOCKER_API_VERSION="1.39"
# Run this command to configure your shell:
# eval $(minikube docker-env)
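To make use of it, evaluate the output in the current shell; docker commands on the host will then talk to the Docker daemon inside the Minikube VM rather than a local one:

[root@kubebase ~]# eval $(minikube docker-env)
[root@kubebase ~]# docker ps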

 

13. Check the Kubernetes cluster info.

[root@kubebase ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.39.250:8443
KubeDNS is running at https://192.168.39.250:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

 

14. Get the Kubernetes node status.

[root@kubebase ~]# kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
minikube   Ready    master   2m33s   v1.14.2
[root@kubebase ~]# virsh list
 Id    Name                           State
----------------------------------------------------
 1     minikube                       running
[root@kubebase ~]#

 

15. How do you access the “Minikube” VM and check the running containers for K8s? Execute “minikube ssh”.

[root@kubebase ~]# minikube ssh
                         _             _
            _         _ ( )           ( )
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ hostname
minikube
$ docker ps
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS               NAMES
ad8a971620ca        eb516548c180           "/coredns -conf /etc…"   2 minutes ago       Up 2 minutes                            k8s_coredns_coredns-fb8b8dccf-8nr2q_kube-system_754dcbdd-7c6a-11e9-ac49-3c4a73c3bd3b_1
33f143a9716e        eb516548c180           "/coredns -conf /etc…"   2 minutes ago       Up 2 minutes                            k8s_coredns_coredns-fb8b8dccf-5szrq_kube-system_7552ebd8-7c6a-11e9-ac49-3c4a73c3bd3b_1
24fb3d16c349        4689081edb10           "/storage-provisioner"   3 minutes ago       Up 3 minutes                            k8s_storage-provisioner_storage-provisioner_kube-system_77744985-7c6a-11e9-ac49-3c4a73c3bd3b_0
a6e71c8b6a48        k8s.gcr.io/pause:3.1   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_storage-provisioner_kube-system_77744985-7c6a-11e9-ac49-3c4a73c3bd3b_0
8eab3bbc36dc        5c24210246bb           "/usr/local/bin/kube…"   3 minutes ago       Up 3 minutes                            k8s_kube-proxy_kube-proxy-h7xtp_kube-system_754e770e-7c6a-11e9-ac49-3c4a73c3bd3b_0
6dfc217ab40e        k8s.gcr.io/pause:3.1   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_coredns-fb8b8dccf-5szrq_kube-system_7552ebd8-7c6a-11e9-ac49-3c4a73c3bd3b_0
b74e2bf106ea        k8s.gcr.io/pause:3.1   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_kube-proxy-h7xtp_kube-system_754e770e-7c6a-11e9-ac49-3c4a73c3bd3b_0
65abe116b7fc        k8s.gcr.io/pause:3.1   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_coredns-fb8b8dccf-8nr2q_kube-system_754dcbdd-7c6a-11e9-ac49-3c4a73c3bd3b_0
48f7a41de231        5eeff402b659           "kube-apiserver --ad…"   3 minutes ago       Up 3 minutes                            k8s_kube-apiserver_kube-apiserver-minikube_kube-system_f0c7fec2368e56b97aab5eecfcc129ce_0
d6f51369e061        2c4adeb21b4f           "etcd --advertise-cl…"   3 minutes ago       Up 3 minutes                            k8s_etcd_etcd-minikube_kube-system_949db6759563e191943a9567caecc738_0
1f6ba3ce6775        119701e77cbc           "/opt/kube-addons.sh"    3 minutes ago       Up 3 minutes                            k8s_kube-addon-manager_kube-addon-manager-minikube_kube-system_0abcb7a1f0c9c0ebc9ec348ffdfb220c_0
4f7e30421a8b        8be94bdae139           "kube-controller-man…"   3 minutes ago       Up 3 minutes                            k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_9c1e365bd18b5d3fc6a5d0ff10c2b125_0
f2ebfba2662f        ee18f350636d           "kube-scheduler --bi…"   3 minutes ago       Up 3 minutes                            k8s_kube-scheduler_kube-scheduler-minikube_kube-system_9b290132363a92652555896288ca3f88_0
e250137f88af        k8s.gcr.io/pause:3.1   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_kube-apiserver-minikube_kube-system_f0c7fec2368e56b97aab5eecfcc129ce_0
a65431299b17        k8s.gcr.io/pause:3.1   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_etcd-minikube_kube-system_949db6759563e191943a9567caecc738_0
c6534ed33926        k8s.gcr.io/pause:3.1   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_kube-scheduler-minikube_kube-system_9b290132363a92652555896288ca3f88_0
d719dee6a5ea        k8s.gcr.io/pause:3.1   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_kube-addon-manager-minikube_kube-system_0abcb7a1f0c9c0ebc9ec348ffdfb220c_0
590ab88ce56d        k8s.gcr.io/pause:3.1   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_kube-controller-manager-minikube_kube-system_9c1e365bd18b5d3fc6a5d0ff10c2b125_0
$ 
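Exit the SSH session (type "exit" or press Ctrl+D). The same workloads can also be listed from the host with kubectl; the containers shown above correspond to the pods in the kube-system namespace:

[root@kubebase ~]# kubectl get pods -n kube-system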

 

16. To check the Kubernetes component versions, execute the following command.

[root@kubebase ~]# kubectl version -o json
{
  "clientVersion": {
    "major": "1",
    "minor": "14",
    "gitVersion": "v1.14.2",
    "gitCommit": "66049e3b21efe110454d67df4fa62b08ea79a19b",
    "gitTreeState": "clean",
    "buildDate": "2019-05-16T16:23:09Z",
    "goVersion": "go1.12.5",
    "compiler": "gc",
    "platform": "linux/amd64"
  },
  "serverVersion": {
    "major": "1",
    "minor": "14",
    "gitVersion": "v1.14.2",
    "gitCommit": "66049e3b21efe110454d67df4fa62b08ea79a19b",
    "gitTreeState": "clean",
    "buildDate": "2019-05-16T16:14:56Z",
    "goVersion": "go1.12.5",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}
[root@kubebase ~]#

You could also list the “minikube” VM using the virsh command.

[root@kubebase yum.repos.d]# virsh list --all
 Id    Name                           State
----------------------------------------------------
 2     minikube                       running

[root@kubebase yum.repos.d]#
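When you are done experimenting, the Minikube VM can be shut down without deleting the cluster; running "minikube start --vm-driver kvm2" again later brings it back:

[root@kubebase ~]# minikube stop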

We have successfully deployed Minikube on RHEL 7 / CentOS 7. The Kubernetes dashboard is not yet available in the service list; in the next article, we will deploy the dashboard and access it.

Share it! Comment it!! Be Sociable!!!
