The best DROP script for MySQL!

Posted: July 30, 2013 in MySQL, SQL, Ubuntu

There are many situations in which you want to drop all tables in a MySQL database. You can easily drop the schema, but then all the permissions are lost too. There are several solutions out there, but they all require some manual effort. I wanted to get rid of that once and for all and put together this little script. With the following evil shell script you can drop all tables in your database in one go.

Premise: it groups on table_catalog, which may not always be the correct grouping for your database. Please feel free to adapt!

Just create a text file like drop.sh and give it the appropriate rights to execute. Then paste the following content into the file and save it.

#!/bin/bash
# usage: ./drop.sh <user> <password> <database>
echo "Dropping database usr:$1, pwd:$2, db:$3"
echo "set group_concat_max_len = 5000; select concat('set foreign_key_checks=0; ', group_concat(concat('drop table if exists ', table_name, ';') separator ' ')) from information_schema.tables where table_schema = '"$3"' group by table_catalog;" | mysql -u $1 -p$2 --disable-column-names $3 | mysql -u $1 -p$2 $3

Run ./drop.sh root root my_db and voilà… all tables are gone.

Of course, things can be improved. The parameters should be validated and some documentation added…

I am experimenting with Glassfish 4 in preparation for moving some applications from Java EE 6 to Java EE 7. Glassfish 4 implements Java EE 7 and introduces some new concepts. The one we are investigating today is the use of @Asynchronous and @Suspended in REST resources.

The use of the asynchronous annotations is pretty well specified for beans, but there are some pitfalls when it comes to REST services. Let’s go through an example and check the behavior of a REST service with asynchronous methods as we go.

The use case is the following: we define a resource and a bean. The resource is a typical REST resource with a single GET method. The bean is a normal stateless session bean which performs a long-running task. The bean is pretty straightforward; we do not annotate its methods as asynchronous.

@Stateless
public class LongSession {
   public String doLongTask() {
       try {
           // do a long task
           Thread.sleep(10*1000);
       } catch (InterruptedException ex) {
           // ignore
       }
       
       return "done";
   }
}

The first resource we build is pretty simple. We create a @Stateless resource and inject a @Suspended AsyncResponse, which takes care of sending the asynchronous response once the result becomes available.

@Path("as")
@Stateless
public class AsResource {

    @EJB
    private LongSession longSession;
    
    @GET
    @Produces(MediaType.TEXT_HTML)
    @Asynchronous
    public void getXml(@Suspended AsyncResponse resp) {
        System.out.println("do a long task in thread: " + Thread.currentThread().getName());
        String result = this.longSession.doLongTask();
        Response r = Response.ok("this is the " + result).build();
        resp.resume(r);
    }
 
}

When we open the browser and point it to the URL of this resource, we will see that the request blocks for 10 seconds! But that is nothing new. Now, the question is how many threads are available to serve these requests. We are using a stateless bean, so we use the thread pool of the ejb-container, and this has some implications. With the http-thread-pool we would see basically the same behavior, but we are not interested in the request thread here. I want to scale out the ejb-pool, and adding beans to the pool is apparently not enough.

When you press F5 continuously in the browser, you will see something like “INFO: do a long task in thread: __ejb-thread-pool1” in the log. This number counts up and, thanks to AsyncResponse and @Suspended, exceeds the number of threads in the http-thread-pool. But you will also see (in a freshly installed domain) that the thread pool does not exceed the limit of 16, even though you can have a maximum of 64 beans in your pool. We need to fine-tune the thread pool of the ejb-container, but you won’t find any of its properties in the administration console; you need to add them yourself. Open the domain.xml of your domain and add the following lines:

     <ejb-container max-pool-size="64" steady-pool-size="7">
        <property name="thread-core-pool-size" value="10"></property>
        <property name="thread-max-pool-size" value="20"></property>
        <property name="thread-queue-capacity" value="25"></property>
        <ejb-timer-service></ejb-timer-service>
     </ejb-container>

Now rerun your application. You will see that the thread pool goes up to 10 when you press F5 in the browser without holding it down. It seems to stagnate at 10, although you specified a thread-max-pool-size of 20. When you press F5 continuously you will suddenly see the threads go up to 20 before a java.util.concurrent.RejectedExecutionException is thrown. Nice, but what the hell happened?

Let’s dig deeper into the documentation of the thread pool properties:

thread-core-pool-size: Specifies the number of core threads in the EJB container’s common thread pool. The default value is 16. Great, there we have our number 16. Setting this to 10 or 100 will change the actual number of threads doing the work.

thread-max-pool-size: Specifies the maximum number of threads in the EJB container’s common thread pool. The default value is 32. Nice, so increasing this to 100 gives us 100 usable threads? Yes and no. You have to consider the default value of thread-queue-capacity.

thread-queue-capacity: Specifies the size of the thread pool queue, which stores new requests if more than thread-core-pool-size threads are running. The default value is Integer.MAX_VALUE.

Here the confusion starts. The default queue capacity is way too high: pressing F5 will never fill a queue of Integer.MAX_VALUE entries, so the pool never grows beyond thread-core-pool-size. You must limit the queue capacity before the thread pool will scale up to thread-max-pool-size threads. In our example, the pool starts to grow once 25 requests are waiting and scales up to 20 threads. When all of those threads are busy and the queue is full, the container throws an exception.

Back in the day, Sun correctly explained this behavior: “That is exactly how it is supposed to behave. First the threads grow to coreSize, then the queue is used, then *if* the queue fills up then the number of threads expands from coreSize to maxSize. Hence if you use an unbounded queue the last part never happens. This is all described in the documentation. If you want an unbounded queue but more threads then increase the core size. Otherwise consider whether a bounded queue is more suitable to your needs.”
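
This is the semantics of java.util.concurrent.ThreadPoolExecutor (see the javadoc linked below). If you want to watch the core/queue/max growth in isolation, here is a minimal standalone sketch using the same numbers as our ejb-container configuration; the class name PoolDemo and the loop of 50 tasks are made up purely for illustration:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static void main(final String[] args) {
        // same numbers as the ejb-container configuration above:
        // 10 core threads, 20 max threads, a queue with capacity 25
        final ThreadPoolExecutor pool = new ThreadPoolExecutor(
                10, 20, 60, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(25));

        // 50 long-running tasks: the first 10 start core threads, the next 25 are
        // queued, the following 10 grow the pool to its maximum of 20, the rest
        // are rejected with a RejectedExecutionException
        for (int i = 0; i < 50; i++) {
            try {
                pool.execute(new Runnable() {
                    public void run() {
                        try {
                            Thread.sleep(10 * 1000);
                        } catch (final InterruptedException ex) {
                            // ignore
                        }
                    }
                });
                System.out.println("task " + i + " accepted, pool size: " + pool.getPoolSize());
            } catch (final RejectedExecutionException ex) {
                System.out.println("task " + i + " rejected");
            }
        }
        pool.shutdown();
    }
}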

Some extra information can be found here https://java.net/jira/browse/GLASSFISH-17735 and http://docs.oracle.com/javase/6/docs/api/java/util/concurrent/ThreadPoolExecutor.html.

Mounting a Samba share in Ubuntu

Posted: May 8, 2013 in Ubuntu

My DVD drive gave up on me some time ago. I have two laptops running Ubuntu and wanted to use the DVD drive of the second laptop on the first one. You can easily share the drive on one laptop via the properties dialog in the file explorer, and on the other machine you can see the laptop and the shared drive. But, but, but: you can’t access the drive from some applications, because they do not support Samba directly (which is rather evident). So you need to mount the share on a local directory. Just type:

sudo mount -t cifs -o username=yourname,password=yourpassword //your-ip-here/sharename mymount

The mymount directory must exist; just create one with mkdir. I had trouble using the hostname instead of the IP; with the direct IP address everything worked as planned. When you look into your mymount directory you will see the content of your share, and you can use this directory in every application.

I recently updated to 12.10 and made the jump to 13.04 shortly after that. Unfortunately, Handbrake refused to scan DVDs due to an error in libdvdnav. Apparently something went wrong during the distribution upgrade. To get libdvdnav and Handbrake working again, just run:

sudo /usr/share/doc/libdvdread4/install-css.sh

Restart Handbrake and your DVDs can be read again….

Moving on….

Posted: April 15, 2013 in Uncategorized

I am currently moving my old blog entries to this new blog. Some old posts will not be transferred because time and software also moved on. The old blog address will be abandoned and disappear in the near future.

ActiveMQ and Network Interfaces

Posted: April 15, 2013 in ActiveMQ

When you are using ActiveMQ as a message broker, the need can arise to bind the queue listener to a specific network interface. Although this is not very common, and some argue that the OS should do the routing, you can face situations in which your software decides which network interface SHOULD or MUST be used for the communication.

The reason could be that some communication channels are expensive and the machine needs to switch network interfaces on the fly because of your business logic. When the servers are not under your control, you cannot simply change the OS routing tables.

There is a simple trick in ActiveMQ to select the network interface to use. You will not find it in the manual, but it is in the code. Just use the following scheme for the binding:

tcp://remote-queue-ip:61616@network-interface-ip:61616

You can also use other protocols, of course.
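
To make this concrete, here is a minimal, hedged sketch of a JMS client that passes such a URL to the ActiveMQ connection factory. The IP addresses are placeholders, and whether your setup accepts the @-binding is exactly the undocumented behavior described above:

import javax.jms.Connection;

import org.apache.activemq.ActiveMQConnectionFactory;

public class BoundClient {
    public static void main(final String[] args) throws Exception {
        // remote broker on 192.168.1.20, local side bound to the interface
        // with the address 192.168.1.10 (both addresses are placeholders)
        final ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "tcp://192.168.1.20:61616@192.168.1.10:61616");
        final Connection connection = factory.createConnection();
        connection.start();
        // ... create a session and a consumer or producer as usual ...
        connection.close();
    }
}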

For one of my customers, we needed a very simple workflow framework which coordinates the flow between methods inside a Java EE 6 application on Glassfish. The business required us to implement the following simple logic:

  1. Create a job.
  2. When step 1 was successful, start the job (which affects some thousand objects) and send an email.
  3. End the job when step 2 was successful.
  4. Mark the job as failed when one of the steps could not be completed due to the implemented business logic.
  5. Provide a handler to handle exceptions.

Using a fully fledged workflow system would have been way over the top for this problem. The workflow is also expected to remain unchanged; and when it does change, the code has to be altered anyway. So we needed to come up with a simple workflow mechanism which is bound to transitions and methods.

So, first we decided to create an observer to change the transitions of the workflow steps. This is basically a very simple method which alters the status and stores it in the database. The next step was to create some annotations to add to the methods which need to be executed during a certain step. We annotate our workflow methods as follows:

@Asynchronous
@TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
@Workflow
@WorkflowStep(from = Status.IDLE, to = Status.PROGRESS)
public void startJob(@WorkflowActionHandler
                     @Observes(notifyObserver = Reception.ALWAYS,
                               during = TransactionPhase.AFTER_SUCCESS)
                     final JobWorkflowEvent event) {}

This straightforward approach is very clean and simple. Basically it says that this method will be executed after the transition from Status.IDLE to Status.PROGRESS. Although we use an enum here, you could use any arbitrary integer. Using the CDI annotations we get some additional power: this method will only be executed after the previous step has completed successfully. Here you can combine the full power of CDI with some basic workflow concepts to create a nice system.
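
The observer is driven by an ordinary CDI event. How the event is fired is not shown above, so here is a hedged sketch of what a trigger could look like; it assumes that @WorkflowActionHandler is a CDI qualifier and that JobWorkflowEvent has a no-argument constructor, and the class name JobService is made up:

import javax.ejb.Stateless;
import javax.enterprise.event.Event;
import javax.inject.Inject;

@Stateless
public class JobService {

    // assumption: @WorkflowActionHandler is a qualifier, matching the observer parameter above
    @Inject
    @WorkflowActionHandler
    private Event<JobWorkflowEvent> events;

    public void createJob() {
        // ... create and persist the job in this transaction ...
        // the AFTER_SUCCESS observer only runs once this transaction has committed
        this.events.fire(new JobWorkflowEvent());
    }
}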

What remains is the transition handling. The transitions are handled in an interceptor which is marked with the @Workflow annotation:

@Interceptor
@Workflow
public class WorkflowInterceptor {

    @AroundInvoke
    public Object handleTransition(final InvocationContext ctx) throws Exception {
        // check the @WorkflowStep annotations via reflection, update the state, then proceed
        return ctx.proceed();
    }
}

This interceptor does not do much more than change the transition state when the intercepted method declares the correct workflow steps. It uses some reflection to figure that out and then allows the call to proceed.

To handle exceptions, we introduce a new annotation:

@Asynchronous
@TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
@Workflow 
@WorkflowException
public void excJob(@WorkflowActionHandler 
                   @Observes(notifyObserver = Reception.ALWAYS,
                             during = TransactionPhase.AFTER_FAILURE)
                   final JobWorkflowEvent event) {
}

Whenever a step fails unexpectedly, control is transferred to this method, where certain actions can be executed. We need this annotation to prevent other methods from picking this event up.

Using CDI together with some custom annotations really did the job and works fine.

JPA, Criteria and custom objects

Posted: April 15, 2013 in Glassfish, JPA, SQL

Suppose you have a SQL statement in which you select custom fields and/or the results of some functions. To make it more interesting, we want to combine this with a GROUP BY and an ordering. Let us say, for example, that you want to execute the following query in MySQL:

SELECT uuid, GROUP_CONCAT(field1, ':', field2),
       MIN(ss_date_started), 
       MAX(ss_date_ended) 
FROM my_table 
WHERE `key` = 'xxx' 
GROUP BY group_key 
ORDER BY id ASC

The functions GROUP_CONCAT, MIN and MAX are available in MySQL. The intention of the GROUP_CONCAT is to get some information out of all the rows of a group without issuing additional selects. The question is how we can create the criteria for this query and how we can retrieve the results.

We want the results to be stored in the following object:

public class MyGroup implements Serializable { 
    private String uuid;
    private Map map = new HashMap();
    private Date dateStarted;
    private Date dateEnded; 

    public MyGroup() {
    }
}

In the map we want to store the GROUP_CONCAT result so that we can avoid additional SELECTs. First of all, we get the CriteriaBuilder and the root of the query. MyEntityForTheTable is the entity which is mapped to the table; it acts as the base for mapping our fields.

final CriteriaBuilder cb =
   this.getEntityManager().getCriteriaBuilder();

final CriteriaQuery cq = ...; // we'll talk about this later

final Root root = 
  cq.from(MyEntityForTheTable.class); 

The WHERE clause is pretty simple:

cq.where(cb.equal(root.get(MyEntityForTheTable_.key), key));

The GROUP BY and the ORDER BY are even simpler:

cq.groupBy(root.get(MyEntityForTheTable_.groupKey)).
   orderBy(cb.asc(root.get(MyEntityForTheTable_.id)));

Now we need to construct the function calls. We extract the MIN and MAX date out of the group by writing the following expressions:

cb.function("MIN", Date.class, root.get(MyEntityForTheTable_.dateStarted));

cb.function("MAX", Date.class, 
   root.get(MyEntityForTheTable_.dateEnded)) 

Next comes the GROUP_CONCAT in MySQL, which can be written as:

cb.function("GROUP_CONCAT", byte[].class, 
    root.get(MyEntityForTheTable_.field1), 
    cb.literal(":"), 
    root.get(MyEntityForTheTable_.field2))

First of all, we cannot take String.class to read out the result: MySQL returns the concatenated string as bytes, so we need to map it to a byte[]. The remaining arguments are the parameters passed to the function. The first and the last are the two columns; we also want a colon between the two values, which is achieved with the cb.literal(String) function.

Now we need to execute the query and map the result to an object. We can use the construct() method to instantiate a new object inside our query. Unfortunately, our main domain object does not have the proper constructor, so we extend the POJO with a small subclass that adds the specific constructor.

public static class WrappedGroup extends MyGroup {

  public WrappedGroup() { }

  public WrappedGroup(
     final String uuid,
     final byte[] serviceStatus,
     final Date dateStarted,
     final Date dateEnded) {
     // convert the GROUP_CONCAT bytes to a String, split it into the map
     // and populate the fields inherited from MyGroup
  }
}

We make sure that this constructor has the byte[] parameter. In this constructor we can convert the byte array to a string; then you can convert the string to a map of your choice. So, we are almost done. Here is the complete code:

final CriteriaBuilder cb = 
  this.getEntityManager().getCriteriaBuilder();

final CriteriaQuery<WrappedGroup> cq = 
  cb.createQuery(WrappedGroup.class);

final Root<MyEntityForTheTable> root = 
  cq.from(MyEntityForTheTable.class);

And the select part looks like:

cq.select(cb.construct(WrappedGroup.class,
   root.get(MyEntityForTheTable_.uuid), 
   cb.function("GROUP_CONCAT", byte[].class,
      root.get(MyEntityForTheTable_.field1), 
      cb.literal(":"), 
      root.get(MyEntityForTheTable_.field2)),
   cb.function("MIN", Date.class, 
      root.get(MyEntityForTheTable_.dateStarted)),
   cb.function("MAX", Date.class, 
      root.get(MyEntityForTheTable_.dateEnded)))); 

We then use a TypedQuery to get the results, and we are done:

final TypedQuery<WrappedGroup> query =
   this.getEntityManager().createQuery(cq).
   setFirstResult(startResult).
   setMaxResults(maxResults);
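
The only thing left is to actually run it. Assuming the typed declarations above, fetching the rows is a one-liner:

final List<WrappedGroup> groups = query.getResultList();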

The resulting list contains WrappedGroup instances, which can be used wherever a list of MyGroup objects is expected.

Et voilà, we have a very dynamic query which we can easily refactor and adapt.

Startup script for Ubuntu

Posted: April 15, 2013 in EC2, Play, Ubuntu

When you need to start a service (like Glassfish or Play) in Ubuntu, you can take this script as a template:

#! /bin/sh
case "$1" in
    start)
        # ... execute a shell script here ...
        ;;
    stop)
        # ... execute a shell script here ...
        ;;
    restart)
        # ... execute a shell script here ...
        ;;
    *)
        echo "Usage: $0 {start|stop|restart}"
        ;;
esac
exit 0

Make sure your script is executable. To get it running during startup, add a link to the init.d directory and register the service script so that it runs as the first service. Which priority you assign depends on your system (take care here; I just took 01 as an example):

cd /etc/init.d
sudo ln -sf  ~/your-script.sh your-script.sh
sudo update-rc.d your-script.sh defaults 01 01

Reboot and you’ll see that your services get started.

The EC2 AMIs sometimes have the wrong character set. This can be very annoying in, for example, the German region, where characters like ä and ö are displayed incorrectly. Issue the following commands to switch your Ubuntu/Debian server to the correct character set:

Make sure these variables are set:

export LANGUAGE=en_US.UTF-8
export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8

And don’t forget to regenerate and reconfigure the locales:

locale-gen en_US.UTF-8
dpkg-reconfigure locales