Starting with OpenShift v3: Using the All-In-One Virtual Machine

OpenShift v3 is PaaS management software built on innovative technologies that lets you run your own cloud environment on your private infrastructure, on public clouds, or in a hybrid way.

To get familiar with OpenShift, the best thing to do is to install it on a reasonably powerful (8GB+ RAM) laptop, deploy some of the example environments, and run a few hello-world applications on it.

To do so, I recommend using the All-In-One image provided by the OpenShift team at this address: http://www.openshift.org/vm/

You will need to be familiar with Vagrant and a virtualisation tool like VirtualBox (or, better, a Linux kernel-based virtualisation tool like KVM), and your one-machine OpenShift cluster will be running in minutes.

Step by step

Easy and simple, just perform the following steps. Let's assume that we are using OS X, but the steps are of course very similar on Windows or Linux. For convenience, we will be using VirtualBox, which is available on all three platforms.

Installing the tools

The required tools are: VirtualBox and Vagrant.

VirtualBox is available on the VirtualBox website. Download a 5.0.x version for your platform, and take a few minutes to also download the extension pack for your platform. The installation is quite straightforward using a wizard installer: Next, Next, Next, Install.

Vagrant is a command-line tool used to drive VirtualBox through scripted recipes. The recipe is called a Vagrantfile and contains the whole logic for creating a virtual machine and setting its various configuration elements. Vagrant is also installable with OS-specific packages and/or a wizard-based installer available from the Vagrant download page.

Downloading the OpenShift All-In-One files

Visit the OpenShift All-In-One page at http://www.openshift.org/vm/: you will find there all the material we need to start our OpenShift cluster. You now know what the different tools refer to, so let's continue by downloading the following elements:

  • The Vagrant box file 1.0.6, about 2.1GB: this file is a template VirtualBox image containing the base OpenShift VM
  • The Vagrantfile: the Vagrant recipe to start and run the OpenShift cluster

Once you have these files, I recommend putting them in the same directory, for example under your home directory:

mkdir -p ~/OpenShift
mv ~/Downloads/Vagrantfile ~/OpenShift
mv ~/Downloads/openshift-bootstrap-1.0.6.box ~/OpenShift

Adding the box image

To enable Vagrant to instantiate virtual machines from the provided .box image, we first have to add it to Vagrant's list of available boxes.

cd ~/OpenShift
vagrant box add openshift-bootstrap-1.0.6.box
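
You can check that the box has been registered, and under which name the Vagrantfile will reference it, with:

vagrant box list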

Starting the VM

Before starting the VM, we will make a single change to the Vagrantfile: in case you have a slow laptop like mine, to avoid a timeout while the VM boots, add the following line just after config.vm.box = "openshift3":

config.vm.boot_timeout = 600
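
After the edit, the relevant lines of the Vagrantfile read as follows (the rest of the shipped file is left untouched):

config.vm.box = "openshift3"
config.vm.boot_timeout = 600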

To start your VM, then simply run the following command:

vagrant up

Wait a few minutes; if you want to see progress, you can open the VirtualBox console and watch a VM named "openshift3" being automatically started and configured.

Connecting to your OpenShift dashboard

Vagrant establishes port forwards between some ports of the running VM and ports on your localhost. You may have noticed this in the log messages.
The OpenShift master listens on its localhost interface on port 8443, which is mapped to port 8443 on your laptop's localhost; this is convenient for OpenShift's self-signed SSL certificates to be accepted.
To connect to the dashboard, simply visit this address: https://localhost:8443/ . You will see the login screen.

[Screenshot: OpenShift dashboard login screen]

Login using the following credentials:

  • Username: admin
  • Password: admin

And you will then see the list of existing environments and applications:

[Screenshot: list of projects and applications]

Deploying your first environment

To start a new project, simply click on the New Project button in the upper right corner of the screen and select your environment:

[Screenshot: New Project form]

Running the first hello-world app

Then, you can add your first application to this project by clicking the Add to project button and selecting a template, here the EAP6 template.

[Screenshot: EAP6 template selection]

 

My wordpress blog migration to OpenShift Online

After almost a year using a custom domain on wordpress.com, the world-leading blogging platform is asking me to renew my domain name and service, which is quite expensive given the sporadic use I make of my blog and the traffic that I have.

Anyway, it was a good opportunity to migrate my blog to the OpenShift platform, which is now mature enough to host such projects and gives you the ability, through the WordPress cartridge, to have your own private, administrable installation of WordPress running in the cloud.

I was already running a trial version of OpenShift 2.0 Online, which gave me the ability to run 3 gears, and I had already used 2 for other private projects. So this trial instance would be perfect to host my blog.

If you are in the same situation, here are the steps to follow to migrate a WordPress.com blog to OpenShift:

  • Create your OpenShift Online (v2) WordPress environment
  • Choose a DNS registrar which supports CNAME records at your domain level (preferable)
  • Export your old WordPress site after installing the WordPress Export plugin
  • Install your new WordPress site on OpenShift
  • Import the result of the export of the old site (images are imported automatically, so make sure the old site is still up and running)
  • Edit your domain name in OpenShift to point to your DNS name (see the sketch after this list)
  • Edit your DNS zone to add a CNAME pointing to the OpenShift URL of your blog
  • And you are done!
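
For the last two steps, the commands look roughly like this; myblog, mydomain and blog.example.com are placeholders for your own application name, OpenShift namespace and custom domain:

rhc alias add myblog blog.example.com

And the corresponding record in your DNS zone:

blog.example.com.   IN   CNAME   myblog-mydomain.rhcloud.com.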

 

Enable IPv6 in Pidora

I was disappointed to see how little documentation exists on how to enable IPv6 on a Raspberry Pi running Pidora (the Fedora remix for the RPi).

After poking around for a while, I finally managed to get it working simply through the NetworkManager configuration.
Since my RPi is headless (no display attached), I installed XQuartz on my Mac laptop and enabled X11 forwarding on my SSH session.

By default, the sshd server on RPi does not enable it, so you have to add the following lines to /etc/ssh/sshd_config:

X11Forwarding yes
X11UseLocalhost no

Then restart the service:

# service sshd restart

And on your laptop connect by ssh with the following options:

ssh -Y root@pidora.local

Then launch nm-connection-editor, edit the eth0 network connection, and go to the IPv6 tab. Simply set the method to Automatic instead of Ignore, which is the default.

# nm-connection-editor

[Screenshots: nm-connection-editor, IPv6 settings tab]
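
If you would rather avoid X forwarding entirely, the same change can usually be made from the command line. This is a sketch assuming the standard Fedora ifcfg files are used for eth0; the file name and existing content may differ on your image:

# add or change these lines in /etc/sysconfig/network-scripts/ifcfg-eth0
IPV6INIT=yes
IPV6_AUTOCONF=yes

# then restart NetworkManager to pick up the change
service NetworkManager restart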

 

Locked out of your Mac : A few tips and tools

I was locked out of my Mac for a stupid reason: I downloaded and installed VPN Server Enabler, and when configuring a user to connect, I chose my own user.
The bad thing is that VPN Server Enabler changes the shell to false and the home directory to some private empty directory. If you set a password for the VPN user, it will also change your own user's password… ha ha.

Well, the surprise came this morning when trying to log in while on the plane. After logging in, I was automatically redirected to the lock screen asking for my password again and again.

I rapidly understood that something had changed with my user.

Single User Mode: cmd + S

It is documented everywhere, so the first thing to do is to restart your computer and hold the cmd + S keys while booting. It is supposed to give you a Unix shell, but for me it did not: my hard disk is encrypted, and I only realised later that the screen OS X was presenting me after the cmd + S boot sequence was asking for the encryption key.
So, first tip: if your disk is encrypted, you will have to type your encryption passphrase before gaining access to a single-user shell.

Reading my users details

Here, you will have to use the dscl command. To show your user info, which is quite a large output because it also contains the base64 encoding of your JPEG avatar, just type:
dscl . read /Users/YourUser

After some lines of text, you will see something like:

RecordType: dsRecTypeStandard:Users
UniqueID: 501
UserShell: /bin/false

Bingo, my shell had been changed. Restoring it to something more viable requires the chsh command:

chsh -s /bin/bash MyUser

And then everything was fine: I was able to log in again.

A few more changes to users

Back in OS X, I created a rescue user, just in case. It's like leaving a spare set of keys with your relatives: always a good idea.

And then I realised that MyUser had been altered even further: the home directory and the full name had been changed. This is easy to fix: the Users & Groups pane in OS X allows changing them by right-clicking on the user and selecting Advanced Options. The same can also be done from the command line, as shown below.
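
If you prefer to stay in the single-user shell, the same attributes can be inspected and fixed with dscl. This is a sketch with made-up values; MyUser, the paths and the names are placeholders:

dscl . read /Users/MyUser NFSHomeDirectory RealName
dscl . change /Users/MyUser NFSHomeDirectory /private/var/vpn-empty-home /Users/MyUser
dscl . change /Users/MyUser RealName "VPN User" "My Real Name"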

Hope This Helps

Have multiple configuration files for HAProxy

This post is an extract from an answer to a question on a mailing list or on Stack Overflow. I was in such a hurry not to lose it that I copy-pasted it here, in case I lost the link.

It seems to be from Michael Bibl of the Debian team:

http://permalink.gmane.org/gmane.comp.web.haproxy/17603

 

The goal is to recreate the same structure and functionality as the Apache2 sites-available/sites-enabled layout, since we also wanted the easier management it brings.

Below are the directory structure and the files we modified to make this happen.

Modified /etc/init.d/haproxy:

EXTRAOPTS=$(for FILE in $(find /etc/haproxy/sites-enabled -type l | sort -n); do CONFIGS="$CONFIGS -f $FILE"; done; echo $CONFIGS)
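
The loop simply turns every symlink found in sites-enabled into an extra -f argument, so when the init script starts the daemon the effective command line ends up looking something like this (with the site names from the directory structure below):

haproxy -f /etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid \
        -f /etc/haproxy/sites-enabled/site01.example.com \
        -f /etc/haproxy/sites-enabled/site02.example.com \
        -f /etc/haproxy/sites-enabled/site03.example.com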

Directory structure:

/etc/haproxy
├── errors
│   ├── 400.http
│   ├── 403.http
│   ├── 408.http
│   ├── 500.http
│   ├── 502.http
│   ├── 503.http
│   └── 504.http
├── haproxy.cfg
├── haproxy.cfg.bak
├── haproxy.cfg.bak-2014-03-21
├── sites-available
│   ├── site01.example.com
│   ├── site02.example.com
│   └── site03.example.com
└── sites-enabled
    ├── site01.example.com -> ../sites-available/site01.example.com
    ├── site02.example.com -> ../sites-available/site02.example.com
    └── site03.example.com -> ../sites-available/site03.example.com

Created haensite:

#!/bin/bash

if [[ $EUID -ne 0 ]]; then
    echo "You must be a root user" >&2
    exit 1
fi

if [ $# -lt 1 ]; then
    echo "Invalid number of arguments" >&2
    exit 1
fi

echo "Enabling $1..."

cd /etc/haproxy/sites-enabled
ln -s "../sites-available/$1" ./

echo "To activate the new configuration, you need to run:"
echo " /etc/init.d/haproxy restart"

Created hadissite:

#!/bin/bash

if [[ $EUID -ne 0 ]]; then
    echo "You must be a root user" >&2
    exit 1
fi

if [ $# -lt 1 ]; then
    echo "Invalid number of arguments" >&2
    exit 1
fi

echo "Disabling $1..."

rm -f "/etc/haproxy/sites-enabled/$1"

echo "To activate the new configuration, you need to run:"
echo " /etc/init.d/haproxy restart"

Simple Application Server’s Concepts People Forgot

As a consultant, I often face the same issues and questions from customers, and with the emergence of DevOps practices, I often see people arguing about some old and proven principles, probably only because we, as humans, forget and keep trying to re-invent the wheel.

Java EE and the application server world are no exception, so here are the top three problems I always have to re-explain when dealing with operations and development issues.

Rely on resources provided by application server

Java EE is an extension of Java. As such, some of the people who created Java participated in elaborating the Java EE specification, with the goal of providing mechanisms to solve the current issues of the enterprise world. This led to applying some of the principles behind OO programming and Java programming as well:
– Separation of concerns: segregate responsibilities, do one thing but do it well, work as a service, hide the complexity but provide powerful services.
– Increased reusability: some skilled people will do the job for you, and they were selected to do it well, so use what they produce, use the APIs, and focus on doing your own stuff better.

As you can see, this sounds like DevOps concepts: on a second reading you can see that these principles apply both to code and to human organisations in IT, where people want to build, deploy and operate a piece of software in the most efficient, collaborative and secure manner.

This is how I have been taught Java programming since Java 1.1, and I still think that Java, and as a consequence Java EE, is built this way. Of course it has defects; after all, perfection is not human, but that is another debate.

The datasources

To return to our subject, I remember that one of the toughest issues I keep facing is often related to datasource configuration. To go back to basics, you should remember that in a Java EE application server a lot of resources can be registered in a directory called JNDI. It allows resources to be used in a distributed manner but, most importantly, it decouples a physical resource from its name, just like DNS. And I think that, today, nobody would prefer using static IP addresses instead of name servers.

The people behind the concept of JNDI, by first applying it to datasources, knew that one of the most important interactions between an application server and the rest of the information system is connecting to a source of data. Java EE rapidly came with a solution for that, including a way to easily move from one environment to another: simply use a name indirection and let the application server administrator (the operations guy, or the Ops) configure it in a manner that is transparent to the application that needs data from this source.
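
As a small illustration (the JNDI name java:jboss/datasources/AppDS is just an example; the real one is whatever the Ops team configured), the application code only ever references the logical name, never the database host, port or credentials:

import javax.annotation.Resource;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class CustomerRepository {

  // Injected by the container: only the logical JNDI name is known to the application,
  // the Ops team binds it to the real database in each environment.
  @Resource(lookup = "java:jboss/datasources/AppDS")
  private DataSource dataSource;

  // Equivalent programmatic lookup, for code that cannot use injection.
  static DataSource lookupDataSource() throws NamingException {
    return (DataSource) new InitialContext().lookup("java:jboss/datasources/AppDS");
  }
}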

So I often get very confused when I hear people saying that they don't want to rely on such a feature because they don't want to wait for the Ops guy to do the job, as if the application were going to run in production from the developer's laptop.

Moreover, besides providing a name indirection to the hosted application, the application server also manages the resource life cycle, using pools and other functionalities (failover, hiding passwords, providing drivers, etc.): so many features that the developer would otherwise have to handle.

The modules

In the first generations of Java EE application servers, things were quite monolithic. This was mainly related to implementation choices and the internal architecture of application servers, and not (entirely) to the EE specifications. A few generations later, JBoss AS 7 and some others (Geronimo, GlassFish, etc.) introduced modularity in their architecture to allow better classloading isolation (one of the most important issues in the first application servers, probably because this part was under-specified in Java EE), as well as improving performance and allowing better reusability.

To apply the "eat your own dog food" principle, JBoss AS 7 relies on its own modular classloader (called JBoss Modules) to load its core components and its dependent libraries, and at the same time provides, for applications that may require it, a framework for sharing their libraries or relying on the application server's libraries, resulting in a lower footprint and higher reuse. Quite academic.

And again, the configuration of datasources is no exception.

One more thing to understand: in AS7/WildFly 8+ a module is not a deployment, but a deployment, once deployed, is visible as a module. People often confuse these two concepts.

So Why Should I Use Modules For My Datasource Driver ?

If we put all these arguments together, we can clearly understand that a datasource is a resource provided by the application server under a name that abstracts its physical location. In the end, its goal is to facilitate the deployment of an application in all environments with no modification of the delivery. When doing DevOps, I should put my Ops cap on, remember that principle, and start configuring it as such.

I then have two options. The first is to install the datasource driver as a deployment, by copying it into the $JBOSS_HOME/standalone/deployments directory. As stated before, once a deployment is deployed, it is seen as a module. The JBoss guys provided this feature for compatibility reasons, and because it is quite easy for a Dev to drop the driver there, add a few lines of XML to the server configuration, and have the datasource ready for testing.

But in real life, when we go to production, the developer does not have to bother managing this; someone else does it, in a transparent manner. Hence, the resource is part of the configuration of the application server and does not follow the deployment lifecycle; it follows the AS configuration lifecycle. As such, it is preferable that it relies on resources (classes, libraries, etc.) available in the application server and not deployed on top of it. Said in other words, if the Ops have to wait for the Dev to deploy the JDBC driver, we will end up in a deadlock.

An extra, simple argument: this also allows the credentials for datasources to be managed by a limited number of people, instead of putting them in files inside the source code management system, or worse, on GitHub.

If you see it this way now, you should understand that the driver belongs in the modules directories; but of course you still have the choice, even when wearing your Ops cap, to put the driver in deployments. We just do not recommend it.
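
For the record, here is a minimal sketch of the Ops-side workflow with the CLI on AS7/WildFly, assuming a PostgreSQL driver; the jar path, JNDI name and credentials are placeholders, not values taken from this article:

$JBOSS_HOME/bin/jboss-cli.sh
module add --name=org.postgresql --resources=/tmp/postgresql-9.4-1206.jdbc4.jar --dependencies=javax.api,javax.transaction.api
connect
/subsystem=datasources/jdbc-driver=postgresql:add(driver-name=postgresql, driver-module-name=org.postgresql, driver-class-name=org.postgresql.Driver)
data-source add --name=AppDS --jndi-name=java:jboss/datasources/AppDS --driver-name=postgresql --connection-url=jdbc:postgresql://dbhost:5432/appdb --user-name=app --password=secret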

I hope that these few arguments convinced the most skeptical of you.

 

Know What To Log Not How To Log

TODO

This is about the logging subsystem in JBoss

Use The Tools Provided By The Platform

TODO

This is about the domain mode in JBoss

 

 

Ukelele, or how to get direct access to digits on an AZERTY keyboard on a Mac

This tip will certainly delight those of you who find the default AZERTY keyboard layout an abomination. The word may be a bit strong but, let's be honest, using the standard French keyboard layout is a kind of typing perversion.
The good news is that if you run Mac OS X, all is not lost: a simple solution exists to fix this, with, as a bonus, a bit of daily mental gymnastics that will eventually make your life happier 🙂
The trick is simple: since Leopard, Mac OS X offers the possibility to modify the input layouts by adding different keyboard types through the "Language & Region" menu (on a French OS X it must be ????).
[Screenshot: System Preferences, Language & Region]
You then have the choice between several keyboard layouts. And the ultimate trick is that it is even possible to import your own keyboard layouts, and that a graphical tool exists to generate the layout files: Ukelele: http://scripts.sil.org/cms/scripts/page.php?site_id=nrsi&item_id=ukelele
The Ukelele documentation is very complete and explains how to generate such files. For my part, I was interested in creating a layout that behaves as if the digits were in direct access. The file is available on my GitHub account at https://github.com/akram/french-direct-numbers-keylayout. To install it, just copy it into /System/Library/Keyboard Layouts/, restart Finder, then go to the Keyboard menu, and voilà: the French Direct Numbers layout gives you access to the digits of the Mac keyboard without using the Shift key.
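
In practice the installation boils down to something like this (assuming the layout file in the repository has a .keylayout extension):

git clone https://github.com/akram/french-direct-numbers-keylayout
sudo cp french-direct-numbers-keylayout/*.keylayout "/System/Library/Keyboard Layouts/"
killall Finder
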
[Screenshot: the French Direct Numbers keyboard layout]

Binding an URL in AS7 JNDI tree

AS7 provides the JNDI functionality through the naming subsystem. If you take a look at the corresponding schema ($AS7_HOME/docs/schema/jboss-as-naming_1_1.xsd), you will see that its configuration has only a few options.

What does this XML schema description say? It says that the configuration of the naming subsystem is composed of "binding" elements. Each of these elements can be:

  • A simple type: basically, the common number types (int, long, BigDecimal, etc.) or String.
  • A lookup type: this is only a kind of JNDI name alias, which you can use to have two different names for the same resource.
  • An object-factory type: a class which implements javax.naming.spi.ObjectFactory, instantiated once per declared resource and responsible for the instantiation of a custom object.

As you can see, the simple type is quite limited, but I hope it may evolve depending on needs. So our last chance to register custom types is to use the object-factory.

Create a ResourceURLFactory

To avoid creating one factory class every time you need to bind a URL, the factory will read the value of a system property having the same name as the JNDI resource, and turn it into a URL. Here is what the ResourceURLFactory may look like (some additional checks may help):

package org.akram.factory;
import java.net.URI;
import java.net.URL;
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.Name;
import javax.naming.spi.ObjectFactory;

public class ResourceURLFactory implements ObjectFactory {
  public Object getObjectInstance(Object object, Name name, Context nameCtx,
                                  Hashtable<?,?> environment) throws Exception {
    // The URL value is read from a system property named after the JNDI binding
    String urlAsString = System.getProperty(object.toString());
    URL url = new URI(urlAsString).toURL();
    return url;
  }
}

Add it as a module in JBoss

Then, you need to package this class in a jar and add it as a module in AS7:

mvn install
mkdir -p $AS7_HOME/modules/org/akram/factory/main
cp target/url-resource-factory.jar $AS7_HOME/modules/org/akram/factory/main
vi $AS7_HOME/modules/org/akram/factory/main/module.xml

The content of the module.xml file must be the following. The dependency on javax.api is required because the classes of the jar use this API, so it has to be loaded; otherwise, you will get ClassNotFoundExceptions.

<module xmlns="urn:jboss:module:1.1" name="org.akram.factory">
 <resources>
  <resource-root path="url-resource-factory.jar"/>
 </resources>
 <dependencies>
  <module name="javax.api" />
 </dependencies>
</module>

Bind a new resource using this object-factory

The server can now be started. Then you can try adding the new JNDI binding and a system property with the CLI:

$AS7_HOME/bin/jboss-cli.sh
connect
/subsystem=naming/binding="java:/jboss/exported/myurl":add(binding-type=object-factory, module=org.akram.factory, class=org.akram.factory.ResourceURLFactory)
/system-property="java:/jboss/exported/myurl":add(value="http://www.myurl.org")

From then on, every lookup of java:/jboss/exported/myurl will return a java.net.URL object pointing to http://www.myurl.org.
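
From application code, a lookup would then look like this minimal sketch (assuming it runs inside the server, or with an appropriately configured remote InitialContext):

import java.net.URL;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class UrlLookupExample {
  public static void main(String[] args) throws NamingException {
    // The binding created above resolves to a java.net.URL built from the system property
    URL url = (URL) new InitialContext().lookup("java:/jboss/exported/myurl");
    System.out.println("Resolved URL: " + url);
  }
}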

Disabling session replication in JBoss AS5/EAP5

Today is the official announcement of JBoss EAP 6 (based on AS 7.1), so I was thinking it was a good day to write a blog post about AS5/EAP5.
Probably not the most-read article, but it will probably help someone…

Why would you disable session replication ?

Believe it or not, everything has a cost. My grandmother used to say: "Only the scorpion gives for free." This is of course the case for session replication. It has a cost that can be adjusted using several techniques: buddy replication, change-granularity replication, or synchronous/asynchronous replication mode.
In some rare cases, because your organisation is not ready yet, because your applications do not support it, or because you simply don't want it, you may want to disable session replication while still having the other clustering features of JBoss available, like automatic cluster configuration, farm deployment, HA-JNDI, HA Singleton, etc.

The easy way: Do not set your application distributable

The easiest way to disable (HTTP and stateful session bean) session replication is simply to NOT set the <distributable/> tag in your web.xml file.

<!-- <distributable /> -->

The good thing about this solution is that your JBoss configuration remains untouched and fully standard. However, you are not protected against an application shipping with a distributable tag: that will of course trigger session replication and may have an unexpected impact on your overall cluster performance.

The efficient way: Disable session replication on JBoss AS 5

To prevent JBoss from replicating sessions whatever the deployed application, you have to modify the way JBoss Cache replicates HTTP sessions and SFSB sessions. To do so, just edit the file $JBOSS_HOME/server/<profile>/deploy/cluster/jboss-cache-manager.sar/META-INF/jboss-cache-manager-jboss-beans.xml (where <profile> is the clustered server profile you use) and set the cacheMode parameter to LOCAL for the caches named StandardSessionCacheConfig, FieldSessionCacheConfig and StandardSFSBCacheConfig.

<bean name="StandardSessionCacheConfig" class="org.jboss.cache.config.Configuration">
  <!-- .... Some other parameters .... -->
  <property name="cacheMode">LOCAL</property>
  <!-- .... Some other parameters .... -->
</bean>

The default value for this parameter is REPL_ASYNC, which means that cache replication is triggered asynchronously and does not wait for the cache write confirmation.
The LOCAL value prevents the replication message from being sent, which disables session replication when set on the relevant cache configurations.