Deploying an OpenShift Origin Cluster in the Cloud

OpenShift is Red Hat’s platform-as-a-service offering for hosting and scaling applications. It’s built on top of Kubernetes, the popular container orchestration system originally developed at Google.

There are several options for developers to get a hands-on preview of the OpenShift platform and experience the power of a Kubernetes-based container orchestration platform. The simplest among them is oc cluster up: fire this command on a system with Docker and the OpenShift client tool (oc) installed, and your OpenShift environment is ready for testing.

Then there is Minishift, a tool modeled on Minikube. It runs OpenShift locally as a single-node cluster inside a virtual machine, using a driver such as KVM, xhyve, or Hyper-V.

All good, but if you are an IT administrator, or someone more on the IT infrastructure side, then Minishift and oc cluster up will not help you much. These tools do not give you a real view of how things work behind the scenes, nor a playground for learning how the various components and features of an OpenShift cluster fit together: openshift-sdn, out-of-resource handling, pod scheduling, and several other things.

To get hands-on knowledge of managing OpenShift Container Platform clusters, you need to set up a multi-node OpenShift cluster that mimics how OpenShift is used in a production environment.

In this blog post, we’ll install a three-node OpenShift Origin cluster on a public cloud platform. We will also point a custom domain at the OpenShift cluster for DNS.

At the end, you’ll be able to access your OpenShift cluster from anywhere in the world and try out administrative tasks that are otherwise not possible to test on Minishift.

Container platforms do not need any special hardware features to work, unlike virtualization platforms, which rely on the hardware virtualization (Intel VT) feature of the processor. This allows running a container platform anywhere, on any cloud. I used Hetzner Cloud, but you can use any platform of your choice.

Why did I register a real domain name for a test OpenShift environment?

When I started setting up a self-contained OpenShift cluster, I ran into the problem of how to correctly resolve hostnames, local cluster addresses, application routes, and, importantly, external names.

I tried setting up my own BIND server, dnsmasq, and host-file configuration. Things worked, but not at a satisfactory level: I could not share the cluster with my friend in another country, and there was a lot of overhead in managing the DNS myself.

That’s when I checked the price of registering a domain. It was cheap, less than $2 for a year, and with the domain purchase, managing DNS became easy and worry-free too.

Step 1: Creating the infrastructure

1. Register a domain. Choose the cheapest domain on offer in the market at the moment; I chose a .online domain, which cost me 99 INR for a year. I found BigRock comparatively better than GoDaddy, and cheaper too; their DNS manager is simple to use.


2. Create three VMs on your public cloud; again, you can use any public cloud as per your choice and availability. I am using Hetzner Cloud. Hetzner is a German hosting company; they offer a 2 GB memory server for about 200 INR a month, their service is good, and both CLI and GUI options are available for accessing the cloud.

[eprasad@eprasad log]$ hcloud server create \
--name master --image centos-7 --type cx41 \
--ssh-key eprasad@eprasad9s
[====================================================================] 100%
Server 1635164 created

Repeat this for the other two servers, then run the ‘hcloud server list’ command to find the public IP address of each server.
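If you prefer not to copy IPs by hand, the name/IP pairs can be formatted into A-record lines for pasting into the DNS manager. A minimal sketch, assuming made-up sample data and the placeholder domain example.online (substitute your own names and domain):

```shell
#!/bin/sh
# Hypothetical helper: format "name ip" pairs (e.g. pulled from
# `hcloud server list`) as DNS A-record lines.
# The sample servers below are made up; pipe in your real output instead.
servers="master 95.216.1.10
node1 95.216.1.11
node2 95.216.1.12"

echo "$servers" | while read -r name ip; do
  printf '%s.example.online  A  %s\n' "$name" "$ip"
done
```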

[eprasad@eprasad log]$ hcloud server list -o columns=name,ipv4

3. Add DNS A records in the domain’s DNS manager to map the IP address to the FQDN of each server. Make sure each IP address and FQDN pair matches the output of the previous command. Note the wildcard entry too: * points to the master node’s IP address, where the OpenShift router will run.
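For illustration, assuming the placeholder domain example.online and made-up IP addresses, the final set of records would look something like this:

```
master.example.online   A   95.216.1.10
node1.example.online    A   95.216.1.11
node2.example.online    A   95.216.1.12
*.apps.example.online   A   95.216.1.10
```

The wildcard record points at the master, since that is where the router (and therefore all application routes) will live.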

Step 2: Setting up the infrastructure to deploy the OpenShift cluster

Before installing OKD, we need to configure the hosts to meet OpenShift’s requirements, which includes the following important tasks:

  • Installing base packages
  • Installing Docker
  • Configuring Docker Storage
  • Adding an insecure registry

Let’s begin by installing the ansible package on the master node and creating an Ansible inventory listing all three nodes in the cluster, so that we can leverage Ansible to automate the above tasks from the master node.

1. Install Ansible. SSH into the master node and install the ansible package as your first task.

[root@master ~]# yum install ansible -y

2. Create the Ansible inventory.

[root@master ~]# cat <<EOF > /etc/ansible/hosts
master.example.online
node1.example.online
node2.example.online
EOF

(The example.online hostnames are placeholders; use the FQDNs you configured in DNS.)

3. Enable passwordless login from the master to all the nodes.

a. Generate an SSH key pair on the master system.

[root@master ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
The key's randomart image is:
+---[RSA 2048]----+
|E.. .o           |
|o=o+ +..         |
|Bo+o.+..o. .     |
|=.B =. oo        |
| B * .S.         |
|+ . . B . . .    |
| . o B . o       |
| . ooo .         |
| .+B=.           |
+----[SHA256]-----+

b. Copy the key to all the hosts in the cluster.

[root@master ~]# ansible all -m authorized_key -a "user=root key='{{ lookup('file', '/root/.ssh/id_rsa.pub') }}'" --ask-pass

4. Install the necessary packages

a. Install base packages

[root@master ~]# ansible all -m yum -a 'name=wget,git,net-tools,bind-utils,yum-utils,iptables-services,bridge-utils,bash-completion,kexec-tools,sos,psacct,docker-1.13.1 state=present'

b. Install the EPEL and OpenShift Origin repositories

[root@master ~]# ansible all -m yum -a 'name=epel-release,centos-release-openshift-origin310 state=present'

c. Install the openshift-ansible package

[root@master ~]# ansible all -m yum -a 'name=openshift-ansible state=present'

5. Configure Docker storage using a loopback device backed by file storage.

a. Create a 10 GB /volume file on all the hosts.

[root@master ~]# ansible all -m shell -a '/usr/bin/dd if=/dev/zero of=/volume bs=1M count=10000'
 | SUCCESS | rc=0 >>
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 16.7551 s, 626 MB/s
 | SUCCESS | rc=0 >>
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 125.735 s, 83.4 MB/s
 | SUCCESS | rc=0 >>
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 245.13 s, 42.8 MB/s
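As a quick sanity check of the dd parameters: bs=1M count=10000 writes 10000 blocks of 1 MiB each, which is exactly the byte count dd reports above:

```shell
# 10000 blocks x 1 MiB (1024*1024 bytes) per block
echo $((10000 * 1024 * 1024))   # prints 10485760000, matching dd's output
```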

b. Set up a loopback device using the /volume file.

[root@master ~]# ansible all -m shell -a 'losetup /dev/loop1 /volume'
 | SUCCESS | rc=0 >>
 | SUCCESS | rc=0 >>
 | SUCCESS | rc=0 >>

c. Update /etc/sysconfig/docker-storage-setup to use the /dev/loop1 device for the docker volume group.

[root@master ~]# ansible all -m lineinfile -a "line='DEVS=/dev/loop1' path=/etc/sysconfig/docker-storage-setup"
[root@master ~]# ansible all -m lineinfile -a "line='VG=docker-vg' path=/etc/sysconfig/docker-storage-setup"
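After this step, /etc/sysconfig/docker-storage-setup on every node should contain the two lines below, which tell docker-storage-setup to build the docker-vg volume group on the loopback device:

```
DEVS=/dev/loop1
VG=docker-vg
```

Loopback-backed Docker storage is fine for a test cluster like this one, but it is not recommended for production use.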

d. Run the docker-storage-setup command to complete the Docker storage setup.

[root@master ~]# ansible all -m shell -a '/usr/bin/docker-storage-setup'

6. Configure an insecure Docker registry for OpenShift’s internal registry.

[root@master ~]# ansible all -m shell -a "echo \"OPTIONS='--insecure-registry 172.30.0.0/16'\" >> /etc/sysconfig/docker"
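Assuming Docker’s default configuration layout on CentOS, the appended line in /etc/sysconfig/docker should look like this (172.30.0.0/16 is OpenShift’s default service network, which the internal registry’s service IP comes from):

```
OPTIONS='--insecure-registry 172.30.0.0/16'
```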

Step 3: Create the Ansible inventory file for the OpenShift deployment and deploy it

1. Create the Ansible inventory file for the OpenShift deployment. A minimal working inventory, shown here with placeholder example.online hostnames (substitute your own FQDNs and domain), looks like this:

[root@master ~]# cat /opt/inventory
[OSEv3:children]
masters
nodes
etcd

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=root

# If ansible_ssh_user is not root, ansible_become must be set to true
#ansible_become=true

openshift_deployment_type=origin

# subdomain for routes
openshift_master_default_subdomain=apps.example.online

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# host group for masters
[masters]
master.example.online

# host group for etcd
[etcd]
master.example.online

# host group for nodes, includes region info
[nodes]
master.example.online openshift_node_group_name='node-config-master-infra'
node1.example.online openshift_node_group_name='node-config-compute'
node2.example.online openshift_node_group_name='node-config-compute'

2. Start the deployment. First run the /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml playbook, and then /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml to complete the deployment.

[root@master ~]# ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml -i /opt/inventory

[root@master ~]# ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml -i /opt/inventory

3. Create a new user and set a password.

[root@master ~]# htpasswd -b /etc/origin/master/htpasswd admin <password>

Done! I can now access the OpenShift web console simply by typing https://master.example.online:8443 (substituting your own master FQDN) into a browser tab 🙂
