A few gotchas with OpenShift docker-registry

Here are few gotchas when working with the OpenShift docker registry. These are quite useful if you run OpenShift as a demo or testing environment:

  1. Using the AllowAllIdentityProvider will prevent you from logging in to the registry. For a reason not yet known, if you are using this provider (which comes by default with oc cluster up), then any login attempt to the docker registry will fail, even with a valid token.
  2. Pulling from the registry will result in a 404 error if the registry's public URL is not added to the --insecure-registry list of the local Docker host.
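For the second gotcha, the fix on the Docker host looks like this (the registry hostname and the config file path are examples for a RHEL/CentOS host; adjust them to your environment):

```shell
# /etc/sysconfig/docker (path differs on other distros)
# Add the registry's public URL next to the usual service network entry
INSECURE_REGISTRY='--insecure-registry 172.30.0.0/16 --insecure-registry docker-registry-default.apps.example.com'
```

After editing, restart the docker daemon (systemctl restart docker) so that pulls against the public URL stop returning 404.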


Install openshift web console on origin 3.9

At the time of writing, it looks like the OpenShift web console does not install correctly on OpenShift Origin 3.9.

Here is how to fix it:

wget https://raw.githubusercontent.com/openshift/origin/master/install/origin-web-console/console-config.yaml
wget https://raw.githubusercontent.com/openshift/origin/master/install/origin-web-console/console-template.yaml

Edit the console-config.yaml file to match your requirements

oc create namespace openshift-web-console
sed -i s# console-config.yaml
oc process -f console-template.yaml -p "API_SERVER_CONFIG=$(cat console-config.yaml)" | oc apply -n openshift-web-console -f -
oc patch oauthclient openshift-web-console -p '{ "redirectURIs" : [ "https://console.apps.example.com:8443/" ] }'

Log prometheus alerts in a file when monitoring OpenShift

I am starting to be a big fan of Prometheus, mainly when it is used to monitor OpenShift.

Here is a quick hack for those of you who want to log alerts in a file when they are processed by alertmanager.

Just a small reminder of how Prometheus and Alertmanager works together:

  • Prometheus is responsible for scraping metrics from exporters; at the same time, you can define alerts on these metrics using the Prometheus alerting rule language. These alerts have a condition, a time window, a summary, and a severity. We usually define Prometheus alerts in .rules files located in /etc/prometheus-rules.


  • Alertmanager handles alerts sent by client applications such as the Prometheus server. It takes care of deduplicating, grouping, and routing them to the correct receiver integration, such as email or, more simply, webhooks. The Alertmanager configuration usually resides in /etc/alertmanager/config.yml, and it allows you to define routes, which basically describe what happens to an alert when it fires.

Alertmanager supports a few receivers which can process your alerts. A simple, yet generic, one is the webhook receiver. For a custom need, I had to write a simple file-webhook that only appends the request body to a file named alerts.log, so it can be used later for tracing or anything else. The good thing is that it is a simple Node.js app, and it uses the Node.js s2i image provided by OpenShift. So, you just have to:

oc project monitoring
oc new-build openshift/nodejs~https://github.com/akram/prometheus-file-webhook.git

Once the build is finished, you can deploy the file-webhook app and add a persistent volume to it.

oc new-app file-webhook
oc volume dc/file-webhook --add --name=alerts -t persistentVolumeClaim \
          --claim-name=alerts --claim-size='1Gi' --mount-path=/opt/app-root/src/logs

Then, you will have a new service called file-webhook in your project, which we can plug into alertmanager to write alerts to a file. To make alertmanager use the service, edit your alertmanager configuration using oc edit configmap alertmanager-configmap and change the value of the config.yml key to:

global:
  resolve_timeout: 5m

route:
  group_by: ['alertname']
  group_wait: 10s
  group_interval: 10s
  repeat_interval: 1h
  receiver: 'web.hook'

receivers:
- name: 'web.hook'
  webhook_configs:
  - url: 'http://file-webhook.monitoring.svc.cluster.local:8080/'
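To check the wiring end to end, you can POST a hand-crafted payload to the service from inside the cluster (the service DNS name only resolves there) and then look for it in alerts.log. The JSON body below is a minimal sketch of what alertmanager sends, not the full schema, and the oc rsh form assumes your oc version accepts dc/ names:

```shell
# Send a fake alert notification to the file-webhook service
curl -s -X POST \
     -H 'Content-Type: application/json' \
     -d '{"receiver":"web.hook","status":"firing","alerts":[]}' \
     http://file-webhook.monitoring.svc.cluster.local:8080/

# The request body should now appear as a new line in the log
oc rsh dc/file-webhook tail -n 1 /opt/app-root/src/logs/alerts.log
```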

Then, every time your alerts fire, you will see a new line in alerts.log, something like this:

"receiver":"web\\.hook","status":"firing","alerts":[{"status":"resolved","labels":{"alertname":"PodRestartingTooOften","container":"alert-file-logger2","instance":"","job":"kubernetes-service-endpoints","kubernetes_name":"kube-state-metrics","namespace":"monitoring","pod":"alert-file-logger2-1-61h5p","severity":"page"},"annotations":{"DESCRIPTION":"Pod monitoring/alert-file-logger2-1-61h5p restarting more than once times during last 2 hours.","SUMMARY":"Pod monitoring/alert-file-logger2-1-61h5p restarting more than once times during last 2 hours."},"startsAt":"2017-11-14T09:41:37.803Z","endsAt":"2017-11-14T10:10:38.254Z","generatorURL":"http://prometheus-15-p4lbc:9090/graph?g0.expr=rate%28kube_pod_container_status_restarts%5B2h%5D%29+%2A+7200+%3E+1\u0026g0.tab=1"},{"status":"firing","labels":{"alertname":"PodRestartingTooOften","container":"prometheus","instance":"","job":"kubernetes-service-endpoints","kubernetes_name":"kube-state-metrics","namespace":"monitoring","pod":"prometheus-15-k2fj8","severity":"page"},"annotations":{"DESCRIPTION":"Pod monitoring/prometheus-15-k2fj8 restarting more than once times during last 2 hours.","SUMMARY":"Pod monitoring/prometheus-15-k2fj8 restarting more than once times during last 2 hours."},"startsAt":"2017-11-14T09:41:37.803Z","endsAt":"0001-01-01T00:00:00Z","generatorURL":"http://prometheus-15-p4lbc:9090/graph?g0.expr=rate%28kube_pod_container_status_restarts%5B2h%5D%29+%2A+7200+%3E+1\u0026g0.tab=1"},{"status":"firing","labels":{"alertname":"PodRestartingTooOften","container":"hawkular-openshift-agent","instance":"","job":"kubernetes-service-endpoints","kubernetes_name":"kube-state-metrics","namespace":"monitoring","pod":"hawkular-openshift-agent-wqdgn","severity":"page"},"annotations":{"DESCRIPTION":"Pod monitoring/hawkular-openshift-agent-wqdgn restarting more than once times during last 2 hours.","SUMMARY":"Pod monitoring/hawkular-openshift-agent-wqdgn restarting more than once times during last 2 
hours."},"startsAt":"2017-11-14T09:41:37.803Z","endsAt":"0001-01-01T00:00:00Z","generatorURL":"http://prometheus-15-p4lbc:9090/graph?g0.expr=rate%28kube_pod_container_status_restarts%5B2h%5D%29+%2A+7200+%3E+1\u0026g0.tab=1"}],"groupLabels":{"alertname":"PodRestartingTooOften"},"commonLabels":{"alertname":"PodRestartingTooOften","instance":"","job":"kubernetes-service-endpoints","kubernetes_name":"kube-state-metrics","namespace":"monitoring","severity":"page"},"commonAnnotations":{},"externalURL":"http://alertmanager-16-zxx5j:9093","version":"4","groupKey":"{}:{alertname=\"PodRestartingTooOften\"}"}

Voilà, enjoy, and feel free to share.

Disk space not reclaimed after deleting log files

Hello World,

if you run out of disk space and delete log files, but you don't see your disk space reclaimed, you may have hit an issue that I faced with rsyslog not releasing deleted (rotated) files.

To be sure:

lsof | grep deleted

In the first column, you will see the process still holding the unclosed file descriptor, preventing the disk space from being reclaimed even after deletion.

The solution: systemctl restart rsyslog.service or kill the guilty process.
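You can reproduce the behaviour safely in any Linux shell (the /proc trick below is Linux-specific): as long as a process keeps a descriptor open, the unlinked file's blocks are not freed.

```shell
tmp=$(mktemp)
exec 3> "$tmp"            # open a write descriptor on the file and keep it
echo "some log data" >&3
rm "$tmp"                 # the directory entry is gone...
readlink /proc/$$/fd/3    # ...but the kernel still holds it: path ends in "(deleted)"
exec 3>&-                 # only closing the descriptor releases the disk blocks
```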

Improve your build speed: Run a proxy in OpenShift

Many build processes use external source code or library repositories only available on the internet. That is the case for NPM (Node Package Manager, used to build Node.js applications) or Maven (when building Java applications).

Thus, running an HTTP proxy inside OpenShift can be helpful in many cases:
– in a corporate environment, it is not uncommon to face a proxy that requires authentication. Even if the build mechanism in OpenShift supports it, you will have to put your credentials somewhere, and they may be visible in logs or source code
– your corporate proxy will certainly not cache all the artifacts that you frequently use, so doing it in your own proxy may save you several minutes of build time and several gigabytes of downloads

In this blog, we will learn how to setup an HTTP/HTTPS proxy in OpenShift that is able to forward requests to an upstream corporate HTTP Proxy and also act as a cache but with no persistent volume.

A CentOS/Squid docker image

Unfortunately, I was not able to find a publicly available and reliable docker image that fits my needs so I decided to write my own based on CentOS7 and squid. The sources are available on my GitHub repository and the image is on Docker Hub.
Some important features of this image:
– it can run as any UID, which is very good for OpenShift
– it exposes port 3128, like a usual squid proxy image
– it accepts an environment variable named CACHE_PEER which holds an upstream proxy URL in the form url_encoded_username:url_encoded_password@proxy_hostname:port
– it allows CONNECT for any traffic, mainly used for SSL, and it does not perform SSL interception (this is why we use CONNECT)
If you want to test it, simply run it with docker:

 docker run -d --name="proxy" -p 3128:3128 \
            -e "CACHE_PEER=user:secret@upstream-proxy.corp.mycompany.com:8080" \
            docker.io/akrambenaissi/docker-squid
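A quick smoke test once the container is up; seeing the access entries via docker logs assumes the image logs them to stdout:

```shell
# Fetch a page through the local squid; a 200/301 answer means the proxy works
curl -x http://localhost:3128 -I http://www.example.com/

# Watch the container's log to confirm cache HITs on repeated requests
docker logs -f proxy
```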

Deploying the image in OpenShift

Now, let's see how we can run this image in OpenShift so you can make it usable by other pods. We will be working in the "default" project to ensure that, whatever your configuration, all pods can access this new service.

oc project default
=> Now using project "default" on server "https://paas.mycompany.com:443".

As stated previously, the image accepts an environment variable to target the upstream proxy server.
This is the only thing we need to start it. We just need to use the great oc new-app command with some arguments:

oc new-app docker.io/akrambenaissi/docker-squid --name=proxy \
                     -e 'CACHE_PEER=user:secret@upstream-proxy.corp.mycompany.com:8080'
--> Found Docker image 95aeb47 (About an hour old) from docker.io for "docker.io/akrambenaissi/docker-squid"
    * An image stream will be created as "proxy:latest" that will track this image
    * This image will be deployed in deployment config "proxy"
    * Ports 3128/tcp will be load balanced by service "proxy"
--> Creating resources with label app=proxy ...
    ImageStream "proxy" created
    DeploymentConfig "proxy" created
    Service "proxy" created
--> Success
    Run 'oc status' to view your app.

The installation only takes a few minutes required to pull the image from Docker Hub. Once done, we can see the relevant pod using the oc get pods command:

oc get pods
NAME                    READY     STATUS    RESTARTS   AGE
proxy-1-vz1g3    1/1       Running   0          6m

Accessing your proxy from inside of the cluster

In a multi-tenant OpenShift cluster, pods in different namespaces are isolated and cannot reach each other, thanks to the network isolation provided by openshift-sdn. There is an exception to this: pods deployed in the "default" namespace can be reached by all other pods.
Moreover, OpenShift has an internal DNS which allows processes in pods to perform name resolution within the cluster.
Thanks to this mechanism, our proxy pod's cluster IP address will be resolved from the name proxy.default.svc.cluster.local and reachable on port 3128.

So, if you need, for example, to refer to a proxy in an STI-based build, just put the following lines in the .sti/environment file at the root level of your project in git:
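A minimal sketch of that environment file, assuming the service created above (name proxy in the default project, hence proxy.default.svc.cluster.local) and that your build tooling honours the standard proxy variables:

```shell
# .sti/environment
HTTP_PROXY=http://proxy.default.svc.cluster.local:3128
HTTPS_PROXY=http://proxy.default.svc.cluster.local:3128
```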


Then, enjoy a speedier STI build.

Accessing your proxy from everywhere

In other cases, you want your proxy to be reachable from systems that don't run on OpenShift, like an external software factory. For this specific scenario, we need a special setup.

Indeed, HTTP proxies use a specific form of HTTP communication which cannot be relayed across proxies themselves. Thus, it is not possible to use an OpenShift Route and the openshift-router to expose our brand new proxy to the rest of the world.

However, there is a very powerful feature available in OpenShift to expose non-HTTP/HTTPS/SNI services on all nodes of the cluster: it is called NodePort.
NodePort is a special Service configuration that opens a given port on all OpenShift nodes and redirects traffic to the underlying pods using iptables and kube-proxy.

We will need to create a Service which exposes not just a clusterIP but also a nodePort, on port 31280. OpenShift has a reserved (configurable) port range for nodePort services; the default values are between 30000 and 32767.

oc create -f - << EOF
apiVersion: v1
kind: Service
metadata:
  annotations: {}
  creationTimestamp: null
  labels:
    app: proxy
  name: proxy-node-port
spec:
  ports:
  - name: 3128-tcp
    port: 3128
    protocol: TCP
    nodePort: 31280
  selector:
    app: proxy
    deploymentconfig: proxy
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
EOF

=>  You have exposed your service on an external port on all nodes in your
cluster.  If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:31280) to serve traffic.

See http://releases.k8s.io/HEAD/docs/user-guide/services-firewalls.md for more details.
service "proxy-node-port" created

Now, your proxy can be reached on any node on port 31280. If you have a VIP or a load balancer in front of your nodes, your service will even be load balanced.

Keep in mind that you may need to restrict access to this service to avoid its usage by unwanted people.
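On an external machine (a build server, for instance), pointing at the NodePort is then just standard proxy configuration; the node hostname below is an example:

```shell
# Any OpenShift node name works; the NodePort service listens on all of them
export http_proxy=http://node1.mycompany.com:31280
export https_proxy=http://node1.mycompany.com:31280

# Verify: this request should now transit through the squid pod
curl -I https://repo.maven.apache.org/maven2/
```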

Deploy GitLab on OpenShift

GitLab is a great web-based Git repository application for everyone who wants to run their own Git repository at home or at the office. Unfortunately, a home-made GitLab installation requires some skills that I don't have time to learn. The good thing is that Docker images exist on Docker Hub, including one from the GitLab team. In this blog post, we will use
the sameersbn docker-gitlab image, which has proven to work very well, supports volumes, and also brings separate containers for PostgreSQL and Redis.

Installing postgres

For convenience reasons (and also for support if you are using OpenShift Enterprise), we are using the PostgreSQL image provided by the OpenShift team to start our PostgreSQL instance. The image supports persistent volumes and will create the PersistentVolumeClaim for you.

Simply use the oc new-app command to get your PostgreSQL instance up and running. Note that there is an issue with this image: it runs with a predefined user, so you will have to allow it to run as any UID by using the corresponding Security Context Constraint.

oc new-app --template=postgresql-persistent
--> Deploying template postgresql-persistent in project openshift for "postgresql-persistent"
     With parameters:
--> Creating resources ...
    Service "postgresql" created
    Persistentvolumeclaims "postgresql" created
    DeploymentConfig "postgresql" created

Configuring Security Context

Some of the containers that we will use need to run as root (GitLab) or as a specific user (Postgres has hardcoded user 26, but this should be fixed soon).
Hence, it is required to add the project's service account (here we created a project called gitlab) to the SCC named anyuid.

oc edit scc anyuid

runAsUser:
  type: RunAsAny
users:
- system:serviceaccount:gitlab:default

This configuration will work for postgres and gitlab image.

Installing Redis

Use the oc new-app command here again, directly passing it the Docker image name, and you will get a Redis instance up and running in seconds.

oc new-app  sameersbn/redis
    Service "redis" created
--> Success
    Run 'oc status' to view your app.
The new-app command will create the Service, Endpoints, and associated pods.
It is still required to add a persistent volume to the DeploymentConfig using the following command:
oc volume dc/redis --add --overwrite -t persistentVolumeClaim \
                        --claim-name=redis-data --name=redis-volume-1 \

Installing GitLab itself

The sameersbn image accepts several parameters that can be injected to configure the GitLab instance to be created.
For some reason, service name resolution does not work with the provided startup script, although entering the container and pinging the services works.
So, we will inject the PostgreSQL and Redis service IP addresses manually using parameters.
To get the Services IP addresses:

oc get svc postgresql redis
NAME         CLUSTER_IP     EXTERNAL_IP   PORT(S)    SELECTOR                                     AGE
postgresql           5432/TCP   app=postgresql,deploymentconfig=postgresql   1h
redis           6379/TCP   app=redis,deploymentconfig=redis             1h
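Rather than copying the addresses by hand, you can capture them into shell variables; this is a sketch that assumes you are logged in with an oc version supporting jsonpath output:

```shell
# Grab the two cluster IPs for injection into the gitlab container
PG_IP=$(oc get svc postgresql -o jsonpath='{.spec.clusterIP}')
REDIS_IP=$(oc get svc redis -o jsonpath='{.spec.clusterIP}')
echo "postgresql=$PG_IP redis=$REDIS_IP"
```

These can then be passed below as -e "DB_HOST=$PG_IP" and -e "REDIS_HOST=$REDIS_IP".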

Use these IP addresses to start the GitLab container, again using the new-app command.
One important thing to note: you need to use the --name parameter to set the name to anything other than gitlab; otherwise, all your OpenShift-injected environment variables will be named GITLAB_*, and GitLab already uses some of those. In our case, the variables will be named GITLAB_CE_*, which avoids the conflicts.

oc new-app sameersbn/gitlab --name=gitlab-ce \
                             -e 'GITLAB_HOST=http://gitlab.apps.mycompany.com' \
                             -e 'DB_TYPE=postgres' -e 'DB_HOST=' \
                             -e 'DB_PORT=5432'    -e 'DB_NAME=gitlab'   -e 'DB_USER=admin' \
                             -e 'DB_PASS=admin'   -e 'REDIS_HOST=' -e 'REDIS_PORT=6379' \
                             -e 'GITLAB_SECRETS_DB_KEY_BASE=1234567890' -e 'SMTP_ENABLED=true' \
                             -e 'SMTP_HOST=smtp.mycompany.com' -e 'SMTP_PORT=25' \
                             -e 'GITLAB_EMAIL=no-reply@mycompany.com'
    Service "gitlab-ce" created
--> Success
    Run 'oc status' to view your app.

Of course, do not forget to add the volumes to make your repositories and logs persistent:

oc volumes dc/gitlab-ce --add --claim-name=gitlab-log --mount-path=/var/log/gitlab \
                     -t persistentVolumeClaim --overwrite
oc volumes dc/gitlab-ce --add --claim-name=gitlab-data --mount-path=/home/git/data \
                     -t persistentVolumeClaim --overwrite

A word on persistent volumes

The persistent volumes that you will have to create may require specific configuration: this is because both postgresql and gitlab use hardcoded UIDs/GIDs and try to chown some files.
If you are using NFS backed Persistent Volume, you will run into permission denied issues on chmod and chown.
To bypass this, you will have to add supplementalGroups to the DeploymentConfig's SecurityContext:
- for postgres: add 26
- for gitlab-ce: add 1000

You will then have to create the two persistent volumes, chown them to those UIDs/GIDs, and use the all_squash export option.

chown -R 26:26 /srv/nfs/pv0001
chown -R 1000:1000 /srv/nfs/pv0002

cat >> /etc/exports << EOF
/srv/nfs/pv0001 *(rw,all_squash)
/srv/nfs/pv0002 *(rw,all_squash)
EOF

exportfs -a

Then create your PV and PVC using the following definitions:

apiVersion: v1
kind: PersistentVolume
metadata:
  creationTimestamp: null
  name: pv0005
spec:
  accessModes:
  - ReadWriteOnce
  - ReadWriteMany
  capacity:
    storage: 8Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: gitlab-data
    namespace: gitlab
  nfs:
    path: /srv/nfs/pv0005
    server: nfs-server.mycompany.com
  persistentVolumeReclaimPolicy: Retain
status: {}

And for the PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  name: gitlab-data
spec:
  accessModes:
  - ReadWriteOnce
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
status: {}

Now you have your GitLab running in OpenShift at the URL http://gitlab.apps.mycompany.com ! Enjoy.

Run OpenShift console on port 443

One thing that I really like about OpenShift is that it very often eats its own dog food. In my opinion, that is generally a sign of good design, but that's another story.
In this blog, I want to give a clue on how to make the OpenShift console run on port 443 by using the openshift-router facilities, a Service, and Endpoints. This can be very useful, for example, if you have a network setup preventing access to port 8443, which is often the case on corporate networks.

As a disclaimer, I just want to state that this is not (well, for now) a production-proof design, but at least you can use it for demonstration purposes or simply to understand the way OpenShift external services work.

As you may guess, the idea here is to create an OpenShift external service pointing to the OpenShift master URL, and then create a route, served by the openshift-router, to forward requests to the OpenShift master itself. Along the way, we need to create an OpenShift Endpoints object, as stated in the documentation.
The final trick is to change the masterPublicURL and publicURL parameters in the master-config.yaml OpenShift configuration to match the route's URL.

Here is the configuration. You will need:
– your master's internal IP address
– a wildcard or plain DNS entry pointing to your openshift-router nodes (this can also be the master itself if you are running the router on the master)
– that's all

So, let's assume the following settings:
My master's domain name is: paas.mycompany.com
My master's internal IP address is:
My openshift-router runs on that IP, and my DNS entry paas.mycompany.com points to it

So you need to create a Service:

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  name: openshift-master
spec:
  ports:
  - name: 8443-tcp
    port: 8443
    protocol: TCP
    targetPort: 8443
  selector: {}
status:
  loadBalancer: {}

and create the corresponding Endpoints object manually:

apiVersion: v1
kind: Endpoints
metadata:
  creationTimestamp: null
  name: openshift-master
subsets:
- addresses:
  - ip:
  ports:
  - name: 8443-tcp
    port: 8443
    protocol: TCP

And then, you need a route with a host entry pointing to paas.mycompany.com:

apiVersion: v1
kind: Route
metadata:
  creationTimestamp: null
  name: openshift-master
spec:
  host: paas.mycompany.com
  port:
    targetPort: 8443
  to:
    kind: Service
    name: openshift-master
  tls:
    termination: passthrough
status:
  ingress: null

The last point is to modify your master-config.yaml to change any occurrences of masterPublicURL or publicURL to the route's URL.
Keep in mind that the certificates you have generated for the console must be valid for the host URL you are pointing to, and you must update your corsAllowedOrigins to add the new domain you are pointing to.

apiLevels:
- v1
apiVersion: v1
assetConfig:
  extensionDevelopment: false
  extensionScripts: null
  extensionStylesheets: null
  extensions: null
  loggingPublicURL: ""
  logoutURL: ""
  masterPublicURL: https://paas.mycompany.com:443
  metricsPublicURL: https://paas.mycompany.com/hawkular/metrics
  publicURL: https://paas.mycompany.com:443/console/
  servingInfo:
    bindNetwork: tcp4
    certFile: master.server.crt
    clientCA: ""
    keyFile: master.server.key
    maxRequestsInFlight: 0
    namedCertificates: null
    requestTimeoutSeconds: 0
controllerLeaseTTL: 0
controllers: '*'
corsAllowedOrigins:
- localhost
- paas.mycompany.com
disabledFeatures: null

Et voilà!
Your OpenShift master console should now be available on port 443.

VPN tunnels through HTTP proxy using SSH

The title of this post is beautiful: 7 words, including 3 evenly distributed acronyms of 3, 4, and 3 letters.
But that's not the topic... Instead, today we will learn how to set up a VPN tunnel using SSH when you are behind a proxy.

You will need:
– a first tool: sshuttle
– an SSH client able to process the ProxyCommand directive
– a remote SSH server running on port 443 or 80
– another tool: corkscrew
– a proxy server only allowing HTTP(S) traffic

Let's describe each in reverse order.

Proxy Server

You probably do not have much control over it, but if the proxy server requires authentication, you should get your pair of credentials. Also, if using corkscrew as we do here, the proxy must support the CONNECT method; otherwise, you should use httptunnel instead.


Corkscrew

corkscrew is a simple tool to tunnel TCP connections through an HTTP proxy supporting the CONNECT method. It reads from stdin and writes to stdout during the connection, just like netcat.
We will use it to connect to an SSH server running on a remote port 443 through the HTTPS proxy. To do so, we will set corkscrew as the ProxyCommand for our SSH client. If your proxy requires authentication, you have to put the credentials in a separate file, let's say ~/.ssh/corkscrew-authfile, with the pattern username:password.
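Creating that auth file might look like this (the credentials are placeholders; keep the file readable only by you):

```shell
mkdir -p ~/.ssh
# pattern is username:password on a single line, no trailing newline
printf '%s' 'username:password' > ~/.ssh/corkscrew-authfile
chmod 600 ~/.ssh/corkscrew-authfile
```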


SSH Server

A Raspberry Pi hidden at home or even an AWS Free Tier machine should be sufficient. The configuration requires the following:

# What ports, IPs and protocols we listen for
Port 22
Port 443

# Authentication:
RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile	%h/.ssh/authorized_keys

SSH Client

The SSH client configuration will be setup in your .ssh/config file, so you don’t need to type it every time you want to use your tunnel.

Host my-remote-ssh-server.mydomain.com
  ProxyCommand corkscrew http-nasty-proxy.mycompany.com 8080 my-remote-ssh-server.mydomain.com 443  /Users/Akram/.ssh/corkscrew-authfile

Then, every time you will do:

ssh my-remote-ssh-server.mydomain.com

You will be automagically connected to your SSH box, because the SSH client will delegate its connection management to corkscrew, which will connect to http-nasty-proxy.mycompany.com on port 8080 using the credentials in the file /Users/Akram/.ssh/corkscrew-authfile, and then wrap the SSH traffic in HTTP CONNECT requests going to my-remote-ssh-server.mydomain.com on port 443.

That was the most difficult part. Once you are connected to your SSH box, the world is then open to you!


sshuttle

sshuttle is the ultimate tool that we will use: it is a transparent VPN proxy over SSH. The sshuttle documentation briefly describes the way it works and gives many usage examples. The one that I use is simply this command line:

sshuttle  --dns -r user@my-remote-ssh-server.mydomain.com 0/0

Just note here that my-remote-ssh-server.mydomain.com is the address of the server for which you have set up the ProxyCommand configuration. Since sshuttle uses SSH under the cover, you have already done the work needed to make the connection succeed (even through an HTTPS proxy).
In my case, I added the --dns option to also route DNS traffic through my tunnel, because corporate DNS traffic is blocked.
If the connection succeeds, you will see a "client connected" message.
Et voilà... all your connections will go through sshuttle to reach the internet.

Starting with OpenShift v3 : Using the All-In-One Virtual Machine

OpenShift v3 is PaaS management software relying on innovative technologies that allow you to run your own cloud environment on your private infrastructure, on public clouds, or in a hybrid way.

To get familiar with OpenShift, the best thing to do is to install it on your (beefy: 8GB+ RAM) laptop, deploy some of the example environments, and run hello world applications on it.

To do so, I recommend using the All-In-One image provided by the OpenShift team at this address: http://www.openshift.org/vm/

You will have to be familiar with Vagrant and a virtualisation tool like VirtualBox (or, better, a Linux-kernel-based virtualisation tool like KVM), and your one-machine OpenShift cluster will be running in minutes.

Step by step

Easy and simple: just perform the following steps. Let's assume that we are using OSX, but the steps are very similar on Windows or Linux, of course. For convenience, we will be using VirtualBox, which is available on all 3 platforms.

Installing the tools

The required tools are: VirtualBox and Vagrant.

VirtualBox is available on the VirtualBox website. Download a 5.0.x version for your platform, and take a few minutes to also download the extension pack for your platform. The installation is quite straightforward using a wizard installer: Next, Next, Next, Install.

Vagrant is a command-line, "script-like" tool used to control VirtualBox. The script recipe is named a Vagrantfile, and it contains the whole logic for creating a virtual machine and setting its various configuration elements. Vagrant is also installable with OS-specific packages and/or a wizard-based installer available from the Vagrant download page.

Downloading the OpenShift All-In-One files

Visit the OpenShift All-In-One page at http://www.openshift.org/vm/ and you will see all the materials that we need to start our OpenShift cluster. You now know what the different tools refer to. So let's continue by downloading the following elements:

  • The Vagrant Box File 1.0.6, about 2.1GB: this file is a template VirtualBox image containing the base OpenShift VM
  • The Vagrantfile: the Vagrant recipe to start and run the OpenShift cluster

Once you have these files, I recommend putting them in the same directory, for example under your home directory:

mkdir -p ~/OpenShift
mv ~/Downloads/Vagrantfile ~/OpenShift
mv ~/Downloads/openshift-bootstrap-1.0.6.box ~/OpenShift

Adding the box image

To enable Vagrant to instantiate virtual machines using the provided .box image, we have to add it to Vagrant's available boxes.

cd ~/OpenShift
vagrant box add openshift-bootstrap-1.0.6.box

Starting the VM

Before starting the VM, we will make a single change to the Vagrantfile, in case you have a slow laptop like mine: to avoid a timeout while starting the VM, add the following line just after config.vm.box = "openshift3"

config.vm.boot_timeout = 600

To start your VM, then simply run the following command:

vagrant up

Wait a few minutes; if you want to see progress, you can launch the VirtualBox console and see that a VM named "openshift3" is automatically started and configured.

Connecting to your OpenShift dashboard

Vagrant establishes port forwards between some ports of the running VM and ports on your localhost. You may have noticed this in the log messages.
The OpenShift master listens on its localhost interface on port 8443, which is mapped to port 8443 on your laptop's localhost; this is convenient for OpenShift's self-signed SSL certificates to be accepted.
To connect to the dashboard, simply visit this address: https://localhost:8443/ . You will see the login screen.


Login using the following credentials:

  • Username: admin
  • Password: admin
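If you prefer the command line, the same credentials work with the oc client; the insecure flag is needed here because of the self-signed certificate:

```shell
oc login https://localhost:8443 -u admin -p admin --insecure-skip-tls-verify=true
oc get projects
```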

And you will then see the list of existing environments and applications:


Deploying your first environment

To start a new project, simply click the New Project button in the upper right corner of the screen and select your environment:


Running the first hello-world app

Then, you can add your first application to this project by clicking the Add to project button and selecting a template; here, the EAP6 template.