Ubuntu 15.04 and Handbrake

Posted: May 2, 2015 in Ubuntu

Once again, watching DVDs does not work out of the box on Ubuntu 15.04: you must install the restricted codecs (ubuntu-restricted-extras) and then libdvdcss2, which is not in the repositories (for some strange reason). Even though I had installed it on 14.10, it was gone after the upgrade. You have to run the following command in a terminal:

$ sudo /usr/share/doc/libdvdread4/install-css.sh

This fixed Handbrake for me once again.

Ubuntu 15.04 and Docker 1.6.0

Posted: April 27, 2015 in Docker, Ubuntu

When you upgrade to Ubuntu 15.04 and install docker using the get.docker.io script, you’ll probably run into an error like “Are you trying to connect to a TLS-enabled daemon without TLS?”.

Ubuntu 15.04 changed to systemd, so you have to enable the docker service yourself:

$ sudo systemctl enable docker

Restart your computer (or start the service right away with sudo systemctl start docker), and it should be working.

OpenVPN on 14.10

Posted: March 23, 2015 in Ubuntu

If NetworkManager does not start your connection (nothing happens when you click the VPN connection in the applet), you'll see the following line in the syslog:

No agents were available for this request

When you start the connection from the CLI, you get the following error:

$ sudo nmcli -p con up id YourConnection

Error: Connection activation failed: no valid VPN secrets.

Now, there are solutions available for password-based connections, but none for when you work with certificates. As a workaround, you can edit the configuration file yourself. You can find it here:

/etc/NetworkManager/system-connections/YourConnection

Open the file and add or overwrite the following lines (the file must be owned by root with permissions 0600, otherwise NetworkManager ignores it):

[vpn]

service-type=org.freedesktop.NetworkManager.openvpn
connection-type=tls
username=YourName

auth=SHA1
remote= ######
cipher=AES-256-CBC
comp-lzo=yes
cert-pass-flags=0
port= ######
cert=path-to-your.p12
ca=path-to-your.p12
key=path-to-your.p12

[vpn-secrets]
cert-pass=YourPasswordForThisConnection

Restart your connection, and the VPN should be up and running.

Easy streaming in Vertx.io

Posted: March 9, 2015 in Java

If you want to stream an incoming request, you can use the following code:

    final Handler<HttpServerRequest> streamer = (HttpServerRequest request) -> {
        HttpServerResponse response = request.response();

        container.logger().info("Streaming... ");
        long ts = System.currentTimeMillis();

        // handle the content
        request.dataHandler((Buffer data) -> {
            container.logger().info("Received " + data.length());

        });

        request.endHandler(v -> {
            container.logger().info("Done! " + (System.currentTimeMillis() - ts));
            response.end();
        });

        // handle upload errors
        request.exceptionHandler(
                (Throwable throwable) -> {
                    throwable.printStackTrace();
                    response.setStatusCode(HttpResponseStatus.BAD_REQUEST.code()).end();
                }
        );
    };

GoCD and Docker

Posted: January 25, 2015 in Uncategorized

I’m currently working on a Docker-plugin for GoCD. You can find the project right here. Since GoCD 15 has a different plugin-api, I will release a new plugin soon.

The Vertx 2 “multithreaded” Option

Posted: January 15, 2015 in Java

The current threading model in Vertx 2 is very powerful. So powerful that you can easily put yourself out of business. In a project I'm currently working on, we created one (micro)service which does no more than store data in a MongoDB instance and build an in-memory Lucene index to provide search functionality. The functionality was trivial: a REST interface for basic CRUD operations, a search resource and one special feature, an importer.

The search resource contains nothing more than code to publish a message to a Lucene worker verticle. This worker verticle uses a local handler to listen for events. All worked well: we handled more than 2000 wildcard search operations per second, with 20 worker threads internally.

As development went on, we encountered a problem during the import process last week. A legacy system uploads a file to the import resource of the service. The importer receives the file, validates the content and starts the import. The data is first stored in MongoDB, and when that call has finished, the same data is added to the Lucene index across the cluster.

One problem, however, was that all the worker threads were blocked during the import process, leaving no room for parallel requests. After a brief search we saw that the MongoDB verticle was consuming all the worker threads, leaving none available for the Lucene index worker. In that state the node is not reachable, with the negative side effect that the supervision service marked the node as down. We had to reduce the number of threads available to the MongoDB verticle. Looking at the GitHub code of this verticle, we saw that the module had the property "multithreaded": true in its mod.json. This is sometimes good, but in our case bad. We forked the project and made the verticle no longer multithreaded. To have enough capacity during the import, we instantiate the MongoDB verticle five times. We currently run with 20 worker threads, so we always have room to keep the search function alive.

The lesson learned: even with predefined modules, you need to fine-tune the threading strategy.

When you want to do Continuous Integration, Delivery and Deployment, you sometimes want to use the latest available artifacts in your builds. As with all things in life, there are people who oppose and people who advocate this approach. It is not a silver bullet, but for certain projects it can be useful. In such projects you just want to reference a range of versions for a certain artifact instead of relying on a static version number. Let me illustrate this with an example.

Let's consider the following scenario. Suppose you work on a web project which depends on a smaller library. Let's assume it is a simple, plain JAR file with tests etc. This JAR file is of course subject to frequent change. You can build your pipeline in such a way that the JAR is compiled, tested and delivered on every change. When a new artifact is uploaded to the repository, you can automatically trigger a new build of your web project.

When you use Maven, the difficult question is how to set a plausible version number without checking the code back in (which would trigger a loop in your build process). Then we must solve the question of how the web project will reference the library. Most likely, your project is pinned to the dependency at version X.Y.Z. The version number is not increased on every build because nothing was patched and the functionality did not change. Moreover, you don't want to check out the web project every time just to manually increase the version number of the dependency.

A solution here is to enhance your versioning scheme with a build number and use ranges. Both needed features are supported by Maven itself (3.2.3 at the time of writing) and the versions-plugin.

The Version Problem

Maven is not that great when it comes to CI. The way Maven handles versions is not very flexible. But if you really think about the problem, you'll realize that it is the way software is versioned that is problematic. The classical X.Y.Z scheme is particularly helpful for humans and human-controlled workflows. The decision which number (major, minor, patch) has to be increased remains a human decision. This definitely has advantages: through this version scheme, you communicate to the outside world how drastically your software has changed.

But it is very impractical for a CI system, because this way of thinking cannot be automated. The CI only knows about the number of builds it has made: build #1, #2, and so on.

You don't want to lose the version information or the build number. A possible solution is to combine the two into one and change the versioning scheme to X.Y.Z-buildnumber.

The Library POM

Using the above scheme is nice, but there is one trade-off: you don't want to store the build number in the source code. This has been done in the past, as I can tell from my own experience, and to a certain extent it was plausible and functioned well. A drawback was that the Maven Release Plugin stored data in SVN itself.

Our goal is to keep the POM as clean and simple as possible, and we don't want to pollute the version number either. The X.Y.Z-buildnumber scheme is useful in the world of CI, but not in your local environment, where the X.Y.Z scheme is the one under your control. So let's keep it simple and use the X.Y.Z scheme in the source code and during development.

There is no need to have a build number locally nor in your source-code.

Building the Library

When the CI builds the library, it can add the build number on the fly. This example uses the Go pipeline counter as the build number we want to append to the version number. Unfortunately you cannot simply append it to the POM: you first have to read the current version, add the build number yourself and write it back. Luckily we can use the power of Linux to put this in one line:

mvn versions:set -DnewVersion=$(mvn help:evaluate -Dexpression=project.version 2>/dev/null| grep -v "^\[")-$GO_PIPELINE_COUNTER
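Broken down, the one-liner just composes a new version string; here is a minimal sketch with hypothetical stand-in values (in reality the version comes from mvn help:evaluate and the counter from GoCD):

```shell
# Hypothetical stand-ins: mvn help:evaluate would print the real version,
# and GoCD exports GO_PIPELINE_COUNTER for every pipeline run.
CURRENT_VERSION="1.0.3"
GO_PIPELINE_COUNTER="57"

# Compose the CI version in the X.Y.Z-buildnumber scheme.
NEW_VERSION="${CURRENT_VERSION}-${GO_PIPELINE_COUNTER}"
echo "${NEW_VERSION}"   # → 1.0.3-57, passed to mvn versions:set -DnewVersion=...
```

The grep -v "^\[" in the one-liner simply filters out Maven's log lines (which start with [INFO] and the like) so that only the bare version number remains.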

The new version number is not checked in. Committing the code back to Git would trigger a new build and put us in a loop. We gave this a lot of thought, and since there is no real use case for it, we decided not to do it. It neither solves nor creates problems.

A note on placeholders

In earlier days it was possible to use placeholders in the version tag. In Maven 3 this is prohibited, except for three fields. And when you use placeholders in the version tag in the source code, you have a problem locally, because you have to provide the values yourself, which is cumbersome and error-prone. So we decided against this approach, although you could easily set the placeholders via the CLI in your CI environment.

The POM with the Dependency

In our example, we want to automatically build a dependent project whenever a change in the above library occurs. You have to be aware that this approach has advantages, but can also have disadvantages: you can easily break the software when there are non-backward-compatible changes. In our philosophy, that is a good thing. We need to keep our software up and running with the newest libraries, and that involves breaks and incompatibilities; we need to detect failures as soon as possible. Of course, this is not the desired behavior for all kinds of projects. You have to decide yourself whether you want this system or not.

Let’s continue. In the depending project, you define a dependency but not with a static version but with a range:

<dependency>
   ...
   <version>[1.0.0, )</version>
</dependency>

This defines the range of versions you want to compile with. It is an open-ended range, so we will try to compile with every new version of the library. When the compilation or the tests fail, we will be informed, so we can fix the issue as soon as possible.

In theory, Maven should be able to resolve the ranges itself, but due to some bugs (we currently use Maven 3.2.3), the range does not always yield the correct artifact. There is a workaround with the versions-plugin. As a first step, you resolve the ranges:

mvn versions:resolve-ranges

This replaces the ranges with the latest versions found in the repository. Your artifact is then built and tested against the latest dependency. Eventually you can even auto-deploy your artifact directly.

If you want to override the language of the Glassfish 4 Server itself, you can add the following option to the JVM options of the domain:

-Duser.language=en

You can add it to the domain.xml directly or you can open the admin console, go to the configuration pane, open the server-config, JVM-settings and then the JVM-option tab.
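For the domain.xml route, the option ends up as a jvm-options element inside the java-config of the server configuration; a minimal sketch (surrounding attributes omitted):

```xml
<config name="server-config">
  <java-config>
    <!-- force English as the server language -->
    <jvm-options>-Duser.language=en</jvm-options>
  </java-config>
</config>
```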

For the REST purists among us, a resource ending with a slash is not the same as a resource without one. In JAX-RS, however, the difference between url://xxxx.com/a and url://xxxx.com/a/ is ignored: the trailing slash is dropped.

When you try to map a resource with a trailing slash using the standard @Path annotation, both methods get mapped to the same endpoint, causing an exception.

@RequestScoped
@Path("/")
public class TestResource {

    @Path("{path}")
    @GET
    public String test1(@PathParam("path") String path) {
        return "test1";
    }

    @Path("{path}/")
    @GET
    public String test2(@PathParam("path") String path) {
        return "test2";
    }
}

Glassfish answers during the deployment with:

SEVERE: Following issues have been detected: WARNING: A resource model has ambiguous (sub-)resource method for HTTP method GET and input mime-types as defined by @Consumes and @Produces annotations at Java methods … These two methods produces and consumes exactly the same mime-types and therefore their invocation as a resource methods will always fail.

Glassfish, which uses Jersey under the hood, also ignores the trailing slash. But there is a workaround: you can use a regex in the @Path annotation to map resources ending with a slash. In the example at the end, I use a regex to map the paths to the methods. But there is something you should know about the path-matching algorithm, which uses the following rules:

The JAX-RS specification has defined strict sorting and precedence rules for matching URI expressions and is based on a most specific match wins algorithm. The JAX-RS provider gathers up the set of deployed URI expressions and sorts them based on the following logic:

  1. The primary key of the sort is the number of literal characters in the full URI matching pattern. The sort is in descending order.
  2. The secondary key of the sort is the number of template expressions embedded within the pattern, i.e., {id} or {id : .+}. This sort is in descending order.
  3. The tertiary key of the sort is the number of nondefault template expressions. A default template expression is one that does not define a regular expression, i.e., {id}.

In the following example, test2 is matched before test1 because its regex is longer: the trailing slash is checked first, and when that fails, matching falls back to test1.


@RequestScoped
@Path("/")
public class TestResource {

    @Path("{test:.*}")
    @GET
    public String test1(@PathParam("test") String test) {
        return "test1";
    }

    @Path("{test:.*[/]}") // takes precedence
    @GET
    public String test2(@PathParam("test") String test) {
        return "test2";
    }
}

It is not unusual to use MySQL's YEARWEEK() function to create identifiers for weeks within years. The problem is well known: you need to store the week number for a certain year, but storing only the week number decouples your data from the real year. You need to store the year too, and that's where YEARWEEK() comes in. But beware, there are some pitfalls!

MySQL

The YEARWEEK() function gives us something like 201304 for the fourth week of 2013. But that's only half the story. Problems arise when you want to know the week number for 31.12.2012: this could be 201253 or 201301, depending on how you look at it. The week can start on Monday or Sunday, and the first week of the year may or may not be required to have at least four days.

There is an agreement on the week calculation, defined in ISO 8601: the first week must contain at least four days, and weeks start on Monday. See http://en.wikipedia.org/wiki/ISO_8601 for more information.

Unfortunately, MySQL uses mode 0 for the week calculation by default, which does not follow ISO 8601. You must set MySQL to mode 3 (via default_week_format) or pass the mode as a parameter to the YEARWEEK() function.
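To avoid passing the mode on every call, the default_week_format server variable can be set per session or globally; a sketch (the global variant requires the appropriate privileges):

```sql
-- for the current session
SET default_week_format = 3;
-- or server-wide
SET GLOBAL default_week_format = 3;
```

With this in place, a plain YEARWEEK("2012-12-31") without a mode argument also yields 201301.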

Mode   First day of week   Range   Week 1 is the first week ...
0      Sunday              0-53    with a Sunday in this year
1      Monday              0-53    with more than 3 days this year
2      Sunday              1-53    with a Sunday in this year
3      Monday              1-53    with more than 3 days this year
4      Sunday              0-53    with more than 3 days this year
5      Monday              0-53    with a Monday in this year
6      Sunday              1-53    with more than 3 days this year
7      Monday              1-53    with a Monday in this year

So, the problem is solved by passing the mode:

select YEARWEEK("2012-12-31");

gives us 201253.

select YEARWEEK("2012-12-31", 3);

gives us 201301, which adheres to ISO 8601.

OK, this problem seems solved: the database has the dates correct. But if I want to query the week, I cannot always rely on the database to calculate the correct week number for me. Sure, I could do round-trips to the server, sending a date and receiving the correct week number. But that's slow.
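As a cheap cross-check outside the database (assuming GNU date is available; this is not part of the original setup), the %G and %V format specifiers implement exactly the ISO 8601 week rules:

```shell
# %G = ISO 8601 week-based year, %V = ISO 8601 week number (GNU date).
date -d "2012-12-31" +%G%V   # → 201301, matching YEARWEEK("2012-12-31", 3)
date -d "2016-01-03" +%G%V   # → 201553 (a Sunday, still ISO week 53 of 2015)
```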

Java

Let's try to rebuild YEARWEEK() in Java using the ISO 8601 norm. The Calendar class is not the solution we're looking for: you can get the week, but you can't get the correct year for that week in ISO 8601 mode. For example, for 2012-12-31 you get week 01 but year 2012, resulting in 201201 for the last week of the year, which is of course incorrect!

The Joda-Time library helps us out and provides the solution. For legacy reasons, the API works with a Calendar object. Joda provides us with the correct week of the year and the correct year for that week (even when the calendar year is different).

I wrote a test class with the Java method that generates the values.

import java.util.Calendar;

import org.joda.time.DateTime;

// ....

/**
 * Returns the ISO 8601 YEARWEEK value for the given calendar.
 */
public int from(final Calendar calendar) {
    DateTime dt = new DateTime(calendar);
    return dt.weekyear().get() * 100 + dt.weekOfWeekyear().get();
}

I've tested the results against the MySQL YEARWEEK() for 12 years of dates, and everything seems to work fine!