For some time now I’ve been working on a pet project: an application composed of a system of microservices running on the JVM (Scala). At the moment there are six services, communicating mainly over messages, but I will write more on that in another post.
All services are packaged into Docker images and deployed to an Elastic Container Service (ECS) cluster. During development I like to run some of those services locally so I can interact with the application. One way to execute a system of dockerized applications is simply to use docker-compose, which is what I had been doing. The only downside is that every time you want to test a change you have to rebuild your Docker image, so you end up with the following sequence of steps:
build fat jar -> docker build -> docker compose
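With sbt that loop might look like this (the image and service names here are placeholders, not from my actual project):

```shell
sbt assembly                       # 1. build the fat jar
docker build -t my-service:dev .   # 2. bake it into a Docker image
docker-compose up -d my-service    # 3. (re)start it via compose
```
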
This may not be a big deal in your case, but some of my Docker images were taking quite a bit of time to build. For what I needed to accomplish, it was not necessary to run dockerized applications; I could just run them directly in a local JVM. Of course, for integration tests, running dockerized services in a prod-like environment would still be necessary. To start several JVM applications I had been using tmux, starting several sessions and executing each jar in its own tmux window. This works well, although it’s a little tedious: you have to switch to each window to start the service.
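The tmux version of that routine looks roughly like this (the session and script names are placeholders):

```shell
# One detached session with one window per service.
tmux new-session -d -s services -n app1 './app1.sh'
tmux new-window -t services -n app2 './app2.sh'
tmux new-window -t services -n app3 './app3.sh'
# Attach and switch between windows to watch each service's output.
tmux attach-session -t services
```
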
I wanted an experience similar to what docker-compose provides, where a single command starts multiple services, with all their output lines flowing in.
After a bit of digging I stumbled upon a GNU utility called parallel, and after some trial and error I ended up with the following command.
-j0 - run as many jobs in parallel as possible (by default parallel runs one job per CPU core)
--line-buffer - print each job’s output as soon as a full line is ready, so stdout lines from the running commands are interleaved without being mixed mid-line
app*.sh may look something like this:
So there we go: parallel to the rescue.