Adding/setting an insecure-registry on an existing docker-machine

Running Docker in non-Linux environments became very convenient and easy with docker-machine, the successor of boot2docker.

Basically, docker-machine allows you to manage multiple Linux virtual machines hosting your docker installation, and then lets you run your containers.
More than a fantastic tool for OSX and Windows, it is also a very clever and practical way to develop multiple container images or several applications (for different projects, for example) using containers.

If you want your docker-machine to use your own in-house registry (or any other), it is not a big issue, unless the registry uses HTTPS with a self-signed certificate; in most cases you will then get the following error:

docker tag -f my-app/my-app-server:v1.0.14-25-gfefb196 dockerhub.rnd.mycompany.net:5000/my-app/my-app-server:v1.0.14-25-gfefb196
docker push dockerhub.rnd.mycompany.net:5000/my-app/my-app-server:v1.0.14-25-gfefb196
The push refers to a repository [dockerhub.rnd.mycompany.net:5000/my-app/my-app-server] (len: 1)
unable to ping registry endpoint https://dockerhub.rnd.mycompany.net:5000/v0/
v2 ping attempt failed with error: Get https://dockerhub.rnd.mycompany.net:5000/v2/: x509: certificate signed by unknown authority
v1 ping attempt failed with error: Get https://dockerhub.rnd.mycompany.net:5000/v1/_ping: x509: certificate signed by unknown authority

For this case, docker-machine has a fantastic option which is available on creation of a machine:

docker-machine create --driver virtualbox --engine-insecure-registry myregistry:5000 mycompany

But suppose that you want to add another registry once your docker-machine is already created: unfortunately, there is no option yet to edit the existing configuration of a VM.
You will have to edit the configuration file, which is located on your host system (your OSX or Windows home), and add it manually:

vim  ~/.docker/machine/machines/mycompany/config.json

Then, edit the config.json file, locate the array named InsecureRegistry and simply append an element to it.
It should look like this:

{
  "ConfigVersion": 1,
 // Truncated for readability 
  "DriverName": "virtualbox",
  "HostOptions": {
    "Driver": "",
    "Memory": 0,
    "Disk": 0,
    "EngineOptions": {
      "ArbitraryFlags": [],
      "Dns": null,
      "GraphDir": "",
      "Env": [],
      "Ipv6": false,
      "InsecureRegistry": [
        "dockerhub.rnd.mycompany.net:5002",
        "dockerhub.rnd.mycompany.net:5000",
        "dockerhub.rnd.mycompany.net:5001"
      ],
      // Truncated for readability 
  },
  "StorePath": "/Users/Akram/.docker/machine/machines/mycompany"
}
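If you prefer scripting the change instead of hand-editing, a small sketch like the following appends the entry and restarts the machine so the engine picks up the new flag (the machine name, path and registry host reuse the example above; the new :5003 port and the use of python3 for JSON editing are my own placeholders):

```shell
# Append a registry to InsecureRegistry in config.json, then restart
# the machine so the docker daemon is started with the new flag.
CONFIG=~/.docker/machine/machines/mycompany/config.json
REGISTRY=dockerhub.rnd.mycompany.net:5003

python3 - "$CONFIG" "$REGISTRY" <<'EOF'
import json, sys
path, registry = sys.argv[1], sys.argv[2]
with open(path) as f:
    cfg = json.load(f)
regs = cfg["HostOptions"]["EngineOptions"]["InsecureRegistry"]
if registry not in regs:
    regs.append(registry)
with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
EOF

docker-machine restart mycompany
```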

OpenShift cheat sheet for beginners

Here is a simple cheat sheet for OpenShift beginners that will help you visualise some basic settings of your projects, applications and pods, in order to debug or get information about how they behave.

Listing all your projects

oc get projects

This will give you the list of all the projects you can work on and highlights the current project.

Setting the current project

oc project my-project

This will switch your current project to my-project. This setting is saved in your ~/.kube/config file, so if multiple people are using oc simultaneously with the same user, be careful not to override each other.

Listing the existing pods (applications)

oc get pods

This will list all the pods (a pod is a wrapper for containers; generally 1 pod = 1 container) and show you the status of each of them.

Checking status for pod

oc describe pod <pod-name>

This will display information about the pod lifecycle: the node on which it has been scheduled, the status of the docker image on the node (present, pulling, or failed to pull), the readiness and liveness status, and whether the pod is started or stopped.

Watching pods logs

oc logs -f <pod-name>

The -f option is for follow, just like for the tail command. This will display the logs sent to stdout by the container.
If the pod has crashed or stopped, it will be in a state that prevents seeing logs unless you specify the -p (for --previous) option.

oc logs -p <pod-name>

Watching event on project

oc get events -w

This will show you all the OpenShift events occurring in the current project and keep watching them (-w for --watch). The events include scheduling, pod startup, image pulls, etc.
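These commands combine well; for example, to follow the logs of the newest pod without copy-pasting its name (a sketch; --sort-by and -o name are standard oc get flags, and at least one pod is assumed to exist in the current project):

```shell
# Follow the logs of the most recently created pod in the current project
POD=$(oc get pods --sort-by=.metadata.creationTimestamp -o name | tail -1)
oc logs -f "$POD"
```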

I hope that this will help every beginner.

Starting with OpenShift v3 : Using the All-In-One Virtual Machine

OpenShift v3 is PaaS management software relying on innovative technologies, allowing you to run your own cloud environment on your private infrastructure, on public clouds, or in a hybrid way.

To get familiar with OpenShift, the best thing to do is to install it on your (beefy: 8GB+ RAM) laptop, deploy some of the example environments and run hello world applications on it.

To do so, I recommend you to use the All-In-One image provided by the OpenShift team at this address: http://www.openshift.org/vm/

You will have to be familiar with Vagrant and a virtualisation tool like VirtualBox (or, alternatively, a Linux kernel based virtualisation tool like KVM), and your one-machine OpenShift cluster will be running in minutes.

Step by step

Easy and simple, just perform the following steps. Let's assume that we are using OSX, but the steps are very similar on Windows or Linux of course. For convenience, we will be using VirtualBox, which is available on all three platforms.

Installing the tools

The required tools are: VirtualBox and Vagrant.

VirtualBox is available on the VirtualBox website. Download a 5.0.x version for your platform and take a few minutes to also download the extension pack. The installation is quite straightforward using a wizard installer: Next, Next, Next, Install.

Vagrant is a command-line tool used to control VirtualBox. The script recipe is named a Vagrantfile and contains the whole logic for creating a virtual machine and setting its various configuration elements. Vagrant is also installable with OS-specific packages and/or a wizard-based installer available from the Vagrant download page.

Downloading the OpenShift All-In-One files

Visit the OpenShift All-In-One page at http://www.openshift.org/vm/ and you will find all the materials we need to start our OpenShift cluster. You now know what the different tools refer to, so let's continue by downloading the following elements:

  • The Vagrant Box File 1.0.6, about 2.1GB: a template VirtualBox image containing the base OpenShift VM
  • The Vagrantfile : The vagrant recipe to start and run the OpenShift cluster

Once you have these files, I recommend putting them in the same directory, for example under your home directory:

mkdir -p ~/OpenShift
mv ~/Downloads/Vagrantfile ~/OpenShift
mv ~/Downloads/openshift-bootstrap-1.0.6.box ~/OpenShift

Adding the box image

To enable Vagrant to instantiate virtual machines using the provided .box image, we will have to add it to the Vagrant available boxes.

cd ~/OpenShift
vagrant box add --name openshift3 openshift-bootstrap-1.0.6.box

Starting the VM

Before starting the VM, we will perform a single change on the Vagrantfile, in case you have a slow laptop like mine: to avoid a timeout while starting the VM, add the following line just after config.vm.box = "openshift3"

config.vm.boot_timeout = 600

To start your VM, then simply run the following command:

vagrant up

Wait for a few minutes, and if you want to see progress, you can launch your VirtualBox console and see that a VM named “openshift3” is automatically started and configured.

Connecting to your OpenShift dashboard

Vagrant establishes port-forwards between ports of the running VM and ports on your localhost; you may have noticed this in the log messages.
The OpenShift master listens on its localhost interface on port 8443, which is mapped to port 8443 on your laptop's localhost; this is convenient for OpenShift's self-signed SSL certificates to be accepted.
To connect to the dashboard, simply visit this address: https://localhost:8443/ . You will then see the login screen.
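You can also check from a terminal that the master answers (a sketch; /healthz is assumed to be exposed by the master, and -k skips verification of the self-signed certificate):

```shell
# -k skips certificate verification, needed for the self-signed cert
curl -k https://localhost:8443/healthz
```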

[Screenshot: OpenShift dashboard login]

Login using the following credentials:

  • Username: admin
  • Password: admin

And you will then see the list of existing environments and applications:

[Screenshot: OpenShift projects and applications list]

Deploying your first environment

To start a new project, simply click the New Project button in the upper right corner of the screen and select your environment:

[Screenshot: New Project dialog]

Running the first hello-world app

Then, you can add your first application to this project by clicking the Add to project button and selecting a template; here, the EAP6 template.

[Screenshot: EAP6 application template]

 

My wordpress blog migration to OpenShift Online

After almost a year working with a custom domain on wordpress.com, the world-leading blogging platform asked me to renew my domain name and service, which is quite expensive given the sporadic use I make of my blog and the traffic that I have.

Anyway, that was a good opportunity to migrate my blog to the OpenShift platform, which is now mature enough to host such projects and gives you, through the WordPress cartridge, your own private and administrable installation of WordPress running in the cloud.

I was already running a trial version of OpenShift 2.0 Online, which gave me the ability to run 3 gears; I had already used 2 for other private projects. So this trial instance would be perfect to host my blog.

If you are in the same situation, here are some steps to follow to migrate a WordPress.com blog to OpenShift.

  • Create your OpenShift Online (v2) wordpress environment
  • Choose a DNS registrar which supports having CNAME on your domain level (preferable)
  • Export your old wordpress site after having installed the WordPress Export plugin
  • Install your new wordpress site on OpenShift
  • Import the result of the export of the old site (images will be imported automatically, so make sure that the old site is still up and running)
  • Edit your domain name in OpenShift to point to your DNS name
  • Edit your DNS zone to add a CNAME pointing to the openshift URL of your blog
  • And you are done !
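The OpenShift side of the DNS step can also be done from the command line with the rhc client (a sketch; the app name and domain are placeholders, and the CNAME itself is still added at your registrar):

```shell
# Tell OpenShift v2 to serve the app under the custom domain
rhc alias add myblog blog.example.com
```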

 

Enable IPv6 in Pidora

I was disappointed to see how little documentation exists on how to enable IPv6 on a Raspberry Pi running Pidora (the Fedora spin for the RPi).

After digging around, I finally managed to enable it simply using the NetworkManager configuration.
Since my RPi is headless (a machine with no display), I simply installed XQuartz on my Mac laptop and enabled X11 forwarding on my SSH session.

By default, the sshd server on the RPi does not enable it, so you have to add the following lines to /etc/ssh/sshd_config:

X11Forwarding yes
X11UseLocalhost no

Then restart the service:

# service sshd restart

And on your laptop connect by ssh with the following options:

ssh -Y root@pidora.local

Then launch nm-connection-editor, edit the eth0 network connection and go to the IPv6 tab. Simply set the configuration to Automatic instead of Ignore, which is the default.

# nm-connection-editor
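Back on the SSH session, you can then check that the interface actually obtained an IPv6 address (eth0 being the connection edited above):

```shell
# A "scope global" inet6 entry means autoconfiguration worked;
# "scope link" (fe80::...) addresses exist even without IPv6 connectivity.
ip -6 addr show dev eth0
```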

[Screenshots: NetworkManager IPv6 settings]

 

Locked out of your Mac : A few tips and tools

I was locked out of my Mac for a stupid reason: I downloaded and installed VPN Server Enabler, and when configuring a user to connect, I chose my own user.
The bad thing is that VPN Server Enabler changes the user's shell to /bin/false and the home directory to some private empty dir. If you set a password for the VPN user, it will also change your own user's password... ha ha.

Well, the surprise happened this morning when trying to log in while on the plane. After logging in, I was automatically redirected to the lock screen asking for my password again and again.

I rapidly understood that something changed with my user.

Single User Mode: cmd + S

It is documented everywhere, so the first thing to do is to restart your computer and hold the cmd + S keys while booting. It is supposed to give you a Unix shell, but for me it did not: my hard disk is encrypted, and I eventually realised that the login screen OSX presented me after the cmd + S boot sequence was asking for the encryption key.
So, first tip: if your disk is encrypted, you will have to type your encryption passphrase before gaining access to a single-user shell.

Reading my users details

Here, you will have to use the dscl command. To show your user info, which is quite a large output because it also contains the base64 encoding of your JPEG avatar, just type:
dscl . read /Users/YourUser

After some lines of text, you will see something like:

RecordType: dsRecTypeStandard:Users
UniqueID: 501
UserShell: /bin/false

Bingo, my shell had been changed. Restoring it to something more viable requires the chsh command:

chsh -s /bin/bash MyUser

And then everything was fine: I was able to log in again.
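Alternatively, the shell entry can be fixed with dscl itself, the same tool used to read it (a sketch; MyUser and the old /bin/false value are from the example above):

```shell
# dscl change takes: record, key, old value, new value
dscl . change /Users/MyUser UserShell /bin/false /bin/bash
```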

A few more changes to users

Back in my OSX, I created a rescue user, just in case. It's like leaving spare keys with your relatives: always a good idea.

And then, I realised that MyUser had been altered even more: the home directory and the full name were changed. Easy here: the Users & Groups pane in OSX allows changing this by right-clicking on the user and selecting Advanced Options.

Hope This Helps

Posted in Mac

Have multiple configuration files for HAProxy

This post is an extract from an answer to a question on a mailing list or on Stack Overflow; I was in such a hurry not to lose it that I copy-pasted it here, in case I lost the link.

It seems to be from Michael Bibl of the Debian team:

http://permalink.gmane.org/gmane.comp.web.haproxy/17603

 

The goal is to recreate the same structure and functionality as the Apache2 style, as we also wanted easier management.

Below are the directory structure and the files we modified to make this happen.

Modified /etc/init.d/haproxy:

EXTRAOPTS=$(for FILE in $(find /etc/haproxy/sites-enabled -type l | sort -n); do CONFIGS="$CONFIGS -f $FILE"; done; echo $CONFIGS)
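As an aside, backticks cannot be nested, so this one-liner needs the $(...) form. The loop itself can be exercised in isolation with a throwaway directory (paths and site names here are placeholders):

```shell
# Build "-f <file>" options from symlinks in a sites-enabled directory,
# using $(...) instead of backticks (which cannot be nested).
HAPROXY_ETC=$(mktemp -d)
mkdir -p "$HAPROXY_ETC/sites-available" "$HAPROXY_ETC/sites-enabled"
for site in site01.example.com site02.example.com; do
  touch "$HAPROXY_ETC/sites-available/$site"
  ln -s "../sites-available/$site" "$HAPROXY_ETC/sites-enabled/$site"
done

CONFIGS=""
for FILE in $(find "$HAPROXY_ETC/sites-enabled" -type l | sort -n); do
  CONFIGS="$CONFIGS -f $FILE"
done
EXTRAOPTS=$CONFIGS
echo "$EXTRAOPTS"
```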

Directory structure:

/etc/haproxy
├── errors
│   ├── 400.http
│   ├── 403.http
│   ├── 408.http
│   ├── 500.http
│   ├── 502.http
│   ├── 503.http
│   └── 504.http
├── haproxy.cfg
├── haproxy.cfg.bak
├── haproxy.cfg.bak-2014-03-21
├── sites-available
│   ├── site01.example.com
│   ├── site02.example.com
│   └── site03.example.com
└── sites-enabled
    ├── site01.example.com -> ../sites-available/site01.example.com
    ├── site02.example.com -> ../sites-available/site02.example.com
    └── site03.example.com -> ../sites-available/site03.example.com

Created haensite:

#!/bin/bash

if [[ $EUID -ne 0 ]]; then
  echo "You must be a root user" >&2
  exit 1
fi

if [ $# -lt 1 ]; then
  echo "Invalid number of arguments" >&2
  exit 1
fi

echo "Enabling $1..."

cd /etc/haproxy/sites-enabled
ln -s "../sites-available/$1" ./

echo "To activate the new configuration, you need to run:"
echo "  /etc/init.d/haproxy restart"

Created hadissite:

#!/bin/bash

if [[ $EUID -ne 0 ]]; then
  echo "You must be a root user" >&2
  exit 1
fi

if [ $# -lt 1 ]; then
  echo "Invalid number of arguments" >&2
  exit 1
fi

echo "Disabling $1..."

rm -f "/etc/haproxy/sites-enabled/$1"

echo "To activate the new configuration, you need to run:"
echo "  /etc/init.d/haproxy restart"
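As a side note, more recent HAProxy versions (1.7 and later, if I recall correctly) accept a directory as the argument of -f, loading the files it contains in lexical order, which makes this kind of init-script loop unnecessary (the conf.d path is a placeholder):

```shell
# Every file in the directory is loaded, in alphabetical order
haproxy -f /etc/haproxy/conf.d/
```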

Simple Application Server’s Concepts People Forgot

As a consultant, I often face the same issues and questions from customers, and with the emergence of DevOps practices, I often see people arguing against some old and proven principles, probably only because we, as humans, forget and keep trying to re-invent the wheel.

Java EE and the application server world are no exception, so here are the top three problems I always have to re-explain when dealing with Operations and Developers issues.

Rely on resources provided by application server

Java EE is an extension of Java, and as such, some of the people who created Java participated in elaborating the Java EE specification, with the goal of providing mechanisms to solve the current issues of the enterprise world. This led to applying some of the principles behind OO programming, and Java programming as well:
– Separation of concerns: segregate responsibilities, do one thing but do it well, try to work as a service, hide the complexity but provide powerful services.
– Increased reusability: some skilled people will do the job for you, and they were selected to do it well, so use what they do, use the APIs, and focus on doing your own stuff better.

As you can see, this sounds like DevOps concepts: on a double reading, you can see that these principles apply both to code and to human organizations in IT, where people want to build, deploy and operate a piece of software in the most efficient, collaborative and secure manner.

This is how I have been taught Java programming since Java 1.1, and I still think that Java, and as a consequence Java EE, is built this way. Of course it has defects; after all, perfection is not human, but this is another debate.

The datasources

To return to our subject, I remember that one of the toughest issues I keep facing is often related to datasource configuration. Going back to basics, you should remember that in a Java EE application server, a lot of resources can be registered in a directory called JNDI. It allows resources to be used in a distributed manner but, and this is the most important thing, it decouples a physical resource from its name, just like DNS. And I think that, today, nobody would prefer using static IP addresses instead of name servers.

The guys behind the concept of JNDI, by first applying it to datasources, knew that one of the most important interactions between an application server and the rest of the information system is connecting to a source of data. Java EE rapidly came with a solution for that, including a way to easily move from one environment to another: simply use a name indirection and let the application server administrator (the Operations guy, or Ops) configure it in a manner transparent to the applications that need data from this source.

So, I often get very confused when I hear people saying that they don’t want to rely on such a feature because they don’t want to wait after the Ops guy to do the job, just like if the application will run on production from the developer’s laptop.

Moreover, not only providing an indirection name to the “welcome to run” application, the application server also manages the resource life cycle, using pools or other functionalities (failover, hiding passwords, providing drivers, etc…), so many features that the developer would have to handle if they were not provided.
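With JBoss AS 7 / WildFly, for example, the Ops side of this indirection is a one-liner in the management CLI (a sketch; every name, the URL and the credentials are placeholders, and the mysql driver is assumed to be already installed):

```shell
# Bind a pooled datasource under a JNDI name; applications look up
# java:jboss/datasources/MyDS and never see the physical URL or credentials.
jboss-cli.sh --connect --command="data-source add \
  --name=MyDS \
  --jndi-name=java:jboss/datasources/MyDS \
  --driver-name=mysql \
  --connection-url=jdbc:mysql://db.example.com:3306/mydb \
  --user-name=appuser \
  --password=secret"
```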

The modules

In the first generations of Java EE application servers, things were quite monolithic. This was only related to implementation choices and the internal architecture of the application servers, and not (entirely) to the EE specifications. A few generations later, JBoss AS 7 and some others (Geronimo, Glassfish, etc.) introduced modularity in their architecture to allow better class loading isolation (one of the most important issues in the first application servers, probably because this part was under-specified in Java EE) as well as to improve performance and allow better reusability.

To apply the "eat your own dog food" principle, JBoss AS 7 relies on its own modular class loader (called JBoss Modules) to load its core components and its dependent libraries, and at the same time provides, for applications that may require it, a framework for sharing their libraries or relying on the application server's libraries, resulting in a lower footprint and higher reuse. Quite academic.

And again, the configuration of datasources is not an exception.

One more thing to understand: in AS7/WildFly 8+, a module is not a deployment, but a deployment, once deployed, is visible as a module. People often mistake and confuse these two concepts.

So Why Should I Use Modules For My Datasource Driver?

If we put all these previous arguments together, we can clearly understand that a datasource is a resource provided by the application server in a named manner, abstracting its physical location. In the end, its goal is to facilitate the deployment of an application in all environments with no modification of the delivery. When doing DevOps, I should put on my Ops cap, remember that principle, and configure it as such.

So then, I have two options. The first is to install the datasource driver as a deployment, by copying it to the $JBOSS_HOME/deployments directory. As stated before, once a deployment is deployed, it is seen as a module. The JBoss guys provided this feature for compatibility reasons and because it is quite easy for a Dev to put the driver there, add a few lines of XML to the AS configuration, and have the datasource ready for testing.

But in real life, when we go to production, the developer does not have to bother managing this; someone else does it, in a transparent manner. Hence, the resource is part of the configuration of the application server and does not follow the deployment lifecycle; it follows the AS configuration lifecycle. As such, it is preferable that it relies on resources (classes, libraries, etc.) available in the application server and not deployed on top of it. In other words, if the Ops have to wait for the Dev to deploy the JDBC driver, we will end up in a deadlock.

An extra simple argument: this also allows credentials for datasources to be managed by a limited number of people, rather than putting them in files inside the source code management system, or worse, on GitHub.

If you see it this way now, you should understand that the driver belongs in the modules directory. Of course you still have the choice, even wearing your Ops cap, to put the driver in deployments, but we do not recommend it.
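For completeness, installing the driver as a proper module can itself be scripted with the management CLI, so the Ops side stays reproducible (a sketch; the jar path, module name and MySQL specifics are placeholders):

```shell
# Install the driver jar as a static module (runs locally, offline)...
jboss-cli.sh --command="module add --name=com.mysql \
  --resources=/tmp/mysql-connector-java.jar \
  --dependencies=javax.api,javax.transaction.api"

# ...then register it as a JDBC driver on the running server
jboss-cli.sh --connect --command="/subsystem=datasources/jdbc-driver=mysql:add( \
  driver-name=mysql, driver-module-name=com.mysql, \
  driver-class-name=com.mysql.jdbc.Driver)"
```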

I hope that these few arguments convinced the most skeptical of you.

 

Know What To Log Not How To Log

TODO

This is about the subsystem logging in JBoss

Use The Tools Provided By The Platform

TODO

This is about the domain mode in JBoss

 

 

Ukelele, or how to get direct-access digits on an AZERTY keyboard on a Mac

This trick will certainly delight those of you who find the default AZERTY keyboard layout an abomination. The word is a bit strong but, let's be honest, using the standard French keyboard is a kind of typing perversion.
The good news is that if you have MacOSX, all is not lost: a simple solution exists to fix this, with, as a bonus, a bit of daily mental gymnastics that will eventually make your life happier 🙂
The trick is simple: since Leopard, MacOSX has offered the possibility to modify input layouts by selecting different keyboard types through the "Language & Region" menu (on a French OSX it should be ????).
[Screenshot: System Preferences]
You then have the possibility to choose between several keyboard layouts. And the ultimate trick is that you can even import your own keyboard layouts, and there is a graphical tool to generate the layout files: Ukulele: http://scripts.sil.org/cms/scripts/page.php?site_id=nrsi&item_id=ukelele
The Ukulele documentation is very complete and explains how to generate such files. For my part, I was interested in creating a layout that behaves as if the digits were in direct access. The file is available on my github account, https://github.com/akram/french-direct-numbers-keylayout; to install it, just copy it to /System/Library/Keyboard Layouts/ and restart Finder, then go to the Keyboard menu, and voilà: the French Direct Numbers layout gives you access to the digits on the Mac keyboard without using the shift key.
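The install step can be scripted as follows (a sketch; the .keylayout file name is my own assumption based on the repository name):

```shell
# Copy the layout where OSX looks for keyboard definitions, then restart Finder
sudo cp "French Direct Numbers.keylayout" "/System/Library/Keyboard Layouts/"
killall Finder
```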
[Screenshot: keyboard layout]
Posted in Mac