Posts Tagged ‘Vertx’

Vert.x 3 is nearing its release date and we’re experimenting with the upcoming release to minimize future surprises. One of the major changes is the generation of fat JARs. The Vert.x 2 fatJar plugin no longer exists in the Vert.x 3 world: Vert.x 3 uses the maven-shade-plugin to generate the fat JAR. Unfortunately, the shade plugin has its own problems. For example, it is very cumbersome to handle duplicate files, especially in transitive dependencies, which are outside your control.

When you’re running into unsolvable errors with the maven-shade-plugin, you can easily switch to the onejar-maven-plugin. Configure your POM as follows:
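A minimal configuration could look like the following sketch. The coordinates and version are assumptions based on the publicly available onejar-maven-plugin; check the plugin’s documentation for current values, and note that One-JAR takes the Main-Class from your JAR’s manifest:

```xml
<plugin>
    <groupId>org.dstovall</groupId>
    <artifactId>onejar-maven-plugin</artifactId>
    <version>1.4.4</version>
    <executions>
        <execution>
            <goals>
                <!-- bind the one-jar goal to the package phase -->
                <goal>one-jar</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```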


This will create a new fat JAR with the project’s dependency JARs nested inside as complete files. They are not unpacked, so there are no issues with files in META-INF overwriting each other. The downside is that One-JAR uses its own boot mechanism.

This again causes problems when, for example, certain JARs use the SPI mechanism or home-grown mechanisms to load classes.
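To make the SPI issue concrete, here is a small self-contained sketch of how `java.util.ServiceLoader` resolves providers from `META-INF/services` entries. It simulates one such entry on disk, using `java.lang.String` as a stand-in provider for `java.lang.CharSequence` (a toy pairing chosen only because it needs no extra classes). A fat-JAR tool that merges or drops these files makes the providers silently disappear:

```java
import java.io.IOException;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ServiceLoader;

public class SpiDemo {
    public static void main(String[] args) throws IOException {
        // Simulate a JAR's META-INF/services entry on disk:
        // the file name is the service interface, the content is the provider class.
        Path root = Files.createTempDirectory("spi-demo");
        Path services = Files.createDirectories(root.resolve("META-INF/services"));
        Files.write(services.resolve("java.lang.CharSequence"),
                    "java.lang.String".getBytes());

        try (URLClassLoader cl = new URLClassLoader(
                new URL[] { root.toUri().toURL() },
                SpiDemo.class.getClassLoader())) {
            int found = 0;
            for (CharSequence provider : ServiceLoader.load(CharSequence.class, cl)) {
                found++;
            }
            // If a fat-JAR tool had overwritten this file while merging JARs,
            // the provider would silently vanish and found would be 0.
            System.out.println("providers found: " + found);
        }
    }
}
```

This is exactly the lookup that breaks when the shade plugin (without its `ServicesResourceTransformer`) lets one JAR’s service file overwrite another’s.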


Easy streaming in Vert.x

Posted: March 9, 2015 in Java

If you want to stream an incoming request, you can use the following code:

    final Handler<HttpServerRequest> streamer = (HttpServerRequest request) -> {
        HttpServerResponse response = request.response();

        container.logger().info("Streaming... ");
        long ts = System.currentTimeMillis();

        // handle the content
        request.dataHandler((Buffer data) -> {
            container.logger().info("Received " + data.length());
        });

        // handle the end of the request
        request.endHandler(v -> {
            container.logger().info("Done! " + (System.currentTimeMillis() - ts));
            response.end();
        });

        // handle upload errors
        request.exceptionHandler((Throwable throwable) -> {
            container.logger().error("Upload failed", throwable);
            response.setStatusCode(500).end();
        });
    };
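To actually serve requests, the handler is registered on an HTTP server, typically inside a verticle’s start() method. A sketch (port 8080 is an arbitrary choice):

```java
// inside a Vert.x 2 verticle's start() method; port 8080 is an assumption
vertx.createHttpServer()
     .requestHandler(streamer)
     .listen(8080);
```

Because dataHandler is invoked per chunk, the request body is never buffered in full, which is what makes this suitable for large uploads.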

The Vert.x 2 “multithreaded” Option

Posted: January 15, 2015 in Java

The threading model in Vert.x 2 is very powerful. So powerful that you can easily put yourself out of business. In a project I’m currently working on, we created one (micro)service which does no more than store data in a MongoDB instance and build an in-memory Lucene index to provide search functionality. The functionality was trivial: a REST interface for basic CRUD operations, a search resource, and one special feature: an importer.

The search resource contains nothing more than code to publish a message to a Lucene worker verticle. This worker verticle uses a local handler to listen for events. All worked well: we handled more than 2,000 wildcard search operations per second. Internally we have 20 worker threads.
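The local handler registration amounts to a few lines. A sketch of the pattern, where the address "search.query" and the searchIndex lookup are made-up examples:

```java
// register a node-local event bus handler for search requests
// ("search.query" and searchIndex(...) are hypothetical names)
vertx.eventBus().registerLocalHandler("search.query", (Message<JsonObject> message) -> {
    JsonObject result = searchIndex(message.body()); // in-memory Lucene lookup
    message.reply(result);
});
```

registerLocalHandler keeps the handler off the cluster-wide event bus, which fits an index that each node builds in its own memory.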

As we went on with the development, we encountered a problem during the import process last week. A legacy system uploads a file to the import resource of the service. The importer receives the file, validates the content, and starts the import. The data was first stored in MongoDB and, when that call finished, the same data was added to the Lucene index across the cluster.

One problem, however, was that all the worker threads were blocked during the import process, leaving no room for parallel requests. After a brief search we saw that the MongoDB verticle was consuming all the worker threads, leaving none available for the Lucene index worker. As a result, the node was no longer reachable, with the negative side effect that the supervision service marked it as down.

We had to reduce the number of threads used by the MongoDB verticle. Looking at the GitHub code of this verticle, we saw that the module had the property “multithreaded”: true in its mod.json. This is sometimes good, but in our case bad. We’ve forked this project and made the verticle no longer multithreaded. To have enough capacity during the import, we’ve instantiated the MongoDB verticle five times. We currently run with 20 worker threads, so there is always room to keep the search function alive.
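For reference, the relevant flag lives in the module’s mod.json. A sketch of the change and of the five-instance deployment follows; the main class, module identifier, and config variable are illustrative, not the actual forked project’s values:

```json
{
    "main": "org.vertx.mods.MongoPersistor",
    "worker": true,
    "multithreaded": false
}
```

```java
// deploy five single-threaded instances of the forked module
// (module identifier and config are illustrative)
container.deployModule("com.example~mod-mongo-persistor~2.0-fork", config, 5);
```

With multithreaded set to false, each instance processes one message at a time on one worker thread, so five instances can occupy at most five of the twenty worker threads.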

The lesson learned is that even with predefined modules, you need to fine-tune the threading strategy.