Sizing your Connection Pool


The ideas in this post started taking shape while I was reading an email, sent to the play-framework mailing list by the author of BoneCP, deprecating it in favour of HikariCP. From there, I read a really good article explaining the heuristic for sizing a connection pool, and another with a thorough critique of different pool implementations.

The formula below is provided by the PostgreSQL project as a heuristic: a starting point around which to try different pool settings:

connections = ((core_count * 2) + effective_spindle_count)
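As a minimal sketch, the heuristic can be written out in Java (the class and method names here are mine, not part of any library):

```java
// A sketch of the PostgreSQL sizing heuristic quoted above.
class PoolSizing {

    // connections = ((core_count * 2) + effective_spindle_count)
    static int suggestedPoolSize(int coreCount, int effectiveSpindleCount) {
        return (coreCount * 2) + effectiveSpindleCount;
    }

    public static void main(String[] args) {
        // e.g. a 4-core machine backed by a single disk
        System.out.println(suggestedPoolSize(4, 1)); // prints 9
    }
}
```

Remember it is only a starting point for experiments, not a final answer.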

This made me think about the difference connection pool choices can make, so I decided to run some deliberate experiments on how different implementations and settings affect the latency of a typical data access layer.

The Problem

Imagine you are developing a great web platform with a classic architecture: a front-facing web interface that calls REST services on the backend, which in turn hit the database and reply to the user with the requested content.

Once your platform reaches success, the application server is overloaded, and so is the database.

At this point, if you didn’t plan accordingly, the answer to the question:

“Will my users stay, or will they go away because the website is sloooow?”

won’t be pleasant.

Actually, you should have planned for it, and this is the goal of this article:
a simple way to set things up and start doing capacity planning, focusing on one of the lowest layers of the architecture: the database connection pool. Most web platforms are deployed on clusters, but I’ll start simple, understand this small piece of the puzzle, and build things up from there.

The Connection Pool

Once the website is hit by 10k users, you could easily see around 20k transactions per second on the database.

  • How do you manage database connections?
  • Do you open one connection for each client request?

Of course not, since opening and closing database connections is expensive.
That’s why several implementations out there provide exactly this:
a cache that reuses existing connections, so that clients can be served faster.
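As a hedged sketch, this is roughly what obtaining such a pooled DataSource looks like with HikariCP (one of the pools mentioned in this post); the JDBC URL and credentials are placeholders, not values from my setup:

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

class PoolFactory {
    static HikariDataSource newPool(int maximumPoolSize) {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/testdb"); // placeholder
        config.setUsername("user");                                   // placeholder
        config.setPassword("secret");                                 // placeholder
        config.setMaximumPoolSize(maximumPoolSize); // the knob varied in these tests
        return new HikariDataSource(config);
    }
}
```

Clients then call getConnection() on the DataSource as usual; the pool hands out a cached connection instead of opening a new one.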

For these tests I considered several connection pool implementations; the results below are for HikariCP.

There are also several others, for example the pool bundled with Tomcat.
Anyway, I don’t want to advise one pool over another: you should always define your use case, measure the performance, tune, and then pick the one that best suits you. For the record, the query under test is just a simple SELECT * FROM table WHERE id = 1;
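As an illustrative sketch (the User and UserMapper names are mine, not taken from the repository), the data access layer issuing that query can be expressed as a MyBatis annotated mapper:

```java
import org.apache.ibatis.annotations.Select;

class User {          // illustrative result type
    public Integer id;
}

interface UserMapper {
    // The only query exercised by these tests
    @Select("SELECT * FROM table WHERE id = #{id}")
    User getById(int id);
}
```

MyBatis generates the implementation at runtime, so the benchmark only has to call getById(1) on a mapper obtained from a SqlSession.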

Show Me The Numbers!

I started to develop a small test infrastructure of my own, but then did some research and found that JMH is a much more advanced tool that suits the purpose very well. I started from scratch using the Maven archetype, then added a simple data access layer implemented with MyBatis.

If you are really serious about measuring latency, you should watch Gil Tene’s excellent talk on How NOT to Measure Latency, where he discusses coordinated omission and its impact on latency measurement. The numbers shown in this experiment mostly suffer from this, and maybe one day I’ll redo them with his HdrHistogram.

I decided to focus the tests on one specific aspect of the pools: the latency of a request.
You can find the code to run these tests on github, where pull requests to fix or improve things are welcome.
The test configures 2 threads to hit the db, under 2 scenarios: a fixed or an increasing pool size.
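As a hedged sketch of the harness shape (the real benchmark lives in the linked repository; the setup body here is only a comment placeholder), a JMH sample-time benchmark parameterized over pool sizes looks roughly like this:

```java
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.SampleTime)        // sample the latency of each operation
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@Threads(2)                            // the two threads hitting the db
public class HikariCPBenchmark {

    @Param({"1", "5", "100"})          // the pool sizes compared below
    public int maximumPoolSize;

    @Setup
    public void setup() {
        // build the DataSource with maximumPoolSize and wire up the MyBatis mapper
    }

    @Benchmark
    public Object testGetById() {
        // issue SELECT * FROM table WHERE id = 1 through the data access layer
        return null; // placeholder: return the fetched row so JMH can't dead-code it
    }
}
```

JMH then reruns the whole benchmark once per @Param value and reports the percentile tables shown below.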

The Results

I report here just the numbers for HikariCP; if you are curious about other pool implementations, just check out the code and run it yourself.

Fixed pool size

# Benchmark: fm.mox.mybatis.jmh.HikariCPBenchmark.testGetById
# Parameters: (maximumPoolSize = 1)
Result: 1.508 ±(99.9%) 0.119 ms/op [Average]
Statistics: (min, avg, max) = (0.573, 1.508, 22.413), stdev = 1.231
Confidence interval (99.9%): [1.389, 1.627]
Samples, N = 1164
mean = 1.508 ±(99.9%) 0.119 ms/op
min = 0.573 ms/op
p( 0.0000) = 0.573 ms/op
p(50.0000) = 1.237 ms/op
p(90.0000) = 2.060 ms/op
p(95.0000) = 2.758 ms/op
p(99.0000) = 6.838 ms/op
p(99.9000) = 21.900 ms/op
p(99.9900) = 22.413 ms/op
p(99.9990) = 22.413 ms/op
p(99.9999) = 22.413 ms/op
max = 22.413 ms/op

# Benchmark: fm.mox.mybatis.jmh.HikariCPBenchmark.testGetById
# Parameters: (maximumPoolSize = 5)
Result: 0.919 ±(99.9%) 0.087 ms/op [Average]
Statistics: (min, avg, max) = (0.489, 0.919, 21.692), stdev = 1.142
Confidence interval (99.9%): [0.831, 1.006]
Samples, N = 1861
mean = 0.919 ±(99.9%) 0.087 ms/op
min = 0.489 ms/op
p( 0.0000) = 0.489 ms/op
p(50.0000) = 0.669 ms/op
p(90.0000) = 1.198 ms/op
p(95.0000) = 1.836 ms/op
p(99.0000) = 5.637 ms/op
p(99.9000) = 20.902 ms/op
p(99.9900) = 21.692 ms/op
p(99.9990) = 21.692 ms/op
p(99.9999) = 21.692 ms/op
max = 21.692 ms/op

# Benchmark: fm.mox.mybatis.jmh.HikariCPBenchmark.testGetById
# Parameters: (maximumPoolSize = 100)
Result: 1.290 ±(99.9%) 0.135 ms/op [Average]
Statistics: (min, avg, max) = (0.553, 1.290, 33.456), stdev = 1.535
Confidence interval (99.9%): [1.155, 1.425]
Samples, N = 1412
mean = 1.290 ±(99.9%) 0.135 ms/op
min = 0.553 ms/op
p( 0.0000) = 0.553 ms/op
p(50.0000) = 0.880 ms/op
p(90.0000) = 2.324 ms/op
p(95.0000) = 3.387 ms/op
p(99.0000) = 5.200 ms/op
p(99.9000) = 33.388 ms/op
p(99.9900) = 33.456 ms/op
p(99.9990) = 33.456 ms/op
p(99.9999) = 33.456 ms/op
max = 33.456 ms/op

Increasing pool size

# Benchmark: fm.mox.mybatis.jmh.HikariCPBenchmark.testGetById
# Parameters: (maximumPoolSize = 1)
Result: 1.129 ±(99.9%) 0.053 ms/op [Average]
Statistics: (min, avg, max) = (0.507, 1.129, 15.450), stdev = 0.642
Confidence interval (99.9%): [1.076, 1.182]
Samples, N = 1592
mean = 1.129 ±(99.9%) 0.053 ms/op
min = 0.507 ms/op
p( 0.0000) = 0.507 ms/op
p(50.0000) = 1.022 ms/op
p(90.0000) = 1.290 ms/op
p(95.0000) = 1.486 ms/op
p(99.0000) = 3.264 ms/op
p(99.9000) = 12.905 ms/op
p(99.9900) = 15.450 ms/op
p(99.9990) = 15.450 ms/op
p(99.9999) = 15.450 ms/op
max = 15.450 ms/op

# Benchmark: fm.mox.mybatis.jmh.HikariCPBenchmark.testGetById
# Parameters: (maximumPoolSize = 5)
Result: 0.884 ±(99.9%) 0.061 ms/op [Average]
Statistics: (min, avg, max) = (0.482, 0.884, 12.026), stdev = 0.833
Confidence interval (99.9%): [0.823, 0.946]
Samples, N = 2003
mean = 0.884 ±(99.9%) 0.061 ms/op
min = 0.482 ms/op
p( 0.0000) = 0.482 ms/op
p(50.0000) = 0.678 ms/op
p(90.0000) = 1.174 ms/op
p(95.0000) = 1.910 ms/op
p(99.0000) = 5.175 ms/op
p(99.9000) = 11.836 ms/op
p(99.9900) = 12.026 ms/op
p(99.9990) = 12.026 ms/op
p(99.9999) = 12.026 ms/op
max = 12.026 ms/op

# Benchmark: fm.mox.mybatis.jmh.HikariCPBenchmark.testGetById
# Parameters: (maximumPoolSize = 100)
Result: 0.593 ±(99.9%) 0.023 ms/op [Average]
Statistics: (min, avg, max) = (0.384, 0.593, 8.765), stdev = 0.386
Confidence interval (99.9%): [0.571, 0.616]
Samples, N = 3089
mean = 0.593 ±(99.9%) 0.023 ms/op
min = 0.384 ms/op
p( 0.0000) = 0.384 ms/op
p(50.0000) = 0.542 ms/op
p(90.0000) = 0.696 ms/op
p(95.0000) = 0.797 ms/op
p(99.0000) = 1.834 ms/op
p(99.9000) = 7.297 ms/op
p(99.9900) = 8.765 ms/op
p(99.9990) = 8.765 ms/op
p(99.9999) = 8.765 ms/op
max = 8.765 ms/op


The tests were run on my MacBook Pro i7 with 2 cores; with a few changes to the JMH configuration, it is easy to tweak the parameters and run them on a 16-core box.

From these numbers we can see, as the sizing article linked above predicts, that the pool size affects latency, and that the impact is more significant on the fixed-size pool than on the increasing one: with only 2 threads issuing requests, the growing pool is never required to grow much beyond its initial size.


This post aimed to show how to lay out a methodology for performance tuning, rather than to crown the best connection pool implementation. I wrote it to satisfy my curiosity and learn a thing or two about connection pool sizing and performance measurement.

If you are serious about performance, always measure your assumptions, and be aware of the coordinated omission pitfalls you may be incurring. I hope it satisfied your curiosity as well and provided some food for thought.


Inversion of Control

Since from Tuesday I will probably be busy studying Spring and other technologies, I’d like to dive in a bit and see what the foundations of this platform are. This topic will probably gain depth over time, but for now let’s start really simple and look at one feature that makes Spring different from EJB: the Inversion of Control pattern.

Imagine you have two classes, Door and Handle. The Door has a private variable holding its Handle, and when it is instantiated with new Door(), it builds a new Handle() inside its constructor, as seen in the following diagram.


If we set things up this way, there is tight coupling between Door and Handle, and probably every change to the Handle class (for example, adding a parameter to the constructor) will affect the Door class as well, since Door is responsible even for the creation of its Handle. To prevent this high coupling, we could use an IoC framework, which takes on the responsibility of injecting the Handle instance into the Door, confining such changes to the IoC framework. Class Door will then only have a private Handle variable and a setHandle(Handle h) method to receive the reference from the IoC framework.


In this case, classes Door and Handle are independent, since the IoC framework uses the public method of class Door to set up the Handle. There are three different ways to implement the IoC pattern, and every web framework has its own implementation.

Setter-based IoC: as in the example I showed before, we use a setter method to inject the referred object (Handle) into the referring object (Door). This way, even if Handle changes a lot during development, we prevent Door from changing. The main drawback is that we expose internal variables through setter methods, violating a key OO principle: encapsulation of data.
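A minimal sketch of setter-based injection (the classes follow the Door/Handle example above; the label field is mine, added just to make the demo observable):

```java
class Handle {
    private final String label;
    Handle(String label) { this.label = label; }
    String label() { return label; }
}

class Door {
    private Handle handle; // no new Handle() here: the framework supplies it

    public void setHandle(Handle h) { this.handle = h; }
    public Handle getHandle() { return handle; }
}

class SetterInjectionDemo {
    public static void main(String[] args) {
        // Playing the IoC framework's role by hand: build the Handle, then inject it.
        Door door = new Door();
        door.setHandle(new Handle("brass"));
        System.out.println(door.getHandle().label()); // prints brass
    }
}
```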

Constructor-based IoC: we use a constructor to set the reference of the object, so that only the creator knows about the referenced object.
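The same Door/Handle pair, sketched with constructor-based injection; note that the field can now be final:

```java
class Handle { }

class Door {
    private final Handle handle; // final: injected once, at construction time

    Door(Handle handle) { this.handle = handle; }
    Handle getHandle() { return handle; }
}

class ConstructorInjectionDemo {
    public static void main(String[] args) {
        Handle handle = new Handle(); // the creator builds the dependency...
        Door door = new Door(handle); // ...and hands it over at construction time
        System.out.println(door.getHandle() == handle); // prints true
    }
}
```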

Interface-based IoC: objects have to implement a specific interface of the IoC framework, so that the framework can properly inject the objects. This way there is no need for an external configuration file with the objects’ references, but it locks your application to a specific IoC framework.
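A sketch of interface-based injection; HandleAware is a made-up stand-in for the framework-owned contract, and the wire method plays the part of the framework:

```java
class Handle { }

// Made-up framework contract: the framework injects into whatever implements it.
interface HandleAware {
    void injectHandle(Handle h);
}

class Door implements HandleAware {
    private Handle handle;

    @Override
    public void injectHandle(Handle h) { this.handle = h; }
    Handle getHandle() { return handle; }
}

class InterfaceInjectionDemo {
    // A toy "framework": it only knows the HandleAware interface, not Door itself.
    static void wire(HandleAware target, Handle h) {
        target.injectHandle(h);
    }

    public static void main(String[] args) {
        Door door = new Door();
        Handle handle = new Handle();
        wire(door, handle);
        System.out.println(door.getHandle() == handle); // prints true
    }
}
```

The coupling to HandleAware is exactly the lock-in drawback mentioned above: Door now compiles against a framework type.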

I am sure this approach has a lot of advantages, above all when you don’t know much about the referred class and how it will change over the application’s lifecycle; but I think that to fully understand its potential, it should be seen in a well-defined context, comparing the same web application written with EJB and with Spring, which accompanies the IoC pattern with principles of Aspect Oriented Programming. I hope to learn this in the future.

By reference or by value?

Since this is my first post, I will start really small and build from there in the future.

I’d like to show some code about how Java passes arguments between methods. In the first example we will use primitive types (int, float, etc.), while in the second we will use instances of a class.

First example:

public void testModifiedString() {
    String originalString = "first string";
    int originalNumber = 10;
    StringModifier test = new StringModifier(originalString, originalNumber);
    String beforeMethodOne = test.string + " " + test.number;
    StringModifier.methodOne(test.string, test.number);
    String afterMethodOne = test.string + " " + test.number;
    assertEquals(afterMethodOne, beforeMethodOne);
}

Where StringModifier has two public fields, string and number, and methodOne(String s, int number) is:

public static void methodOne(String s, int number) {
    s = "string modified by methodOne";
    number = 0;
}

Everyone can guess the output of this test, right? It will print out

first string 10
first string 10

showing us that, in Java, arguments are passed by value: methodOne works on copies of its parameters (the int and the String reference), not affecting the original ones.

Let’s go one step further, and let’s try this with classes. Let’s take a look at the following short test case:

public void testMoveCircle() {
    int origin = 0;
    Circle myCircle = new Circle(origin, origin);
    int delta = 10;
    CircleMover circleMover = new CircleMover();
    circleMover.moveCircle(myCircle, delta, delta);
    assertEquals(origin + delta, myCircle.getX());
    assertEquals(origin + delta, myCircle.getY());
}

We build a new Circle at the origin (0, 0) and print its coordinates. Then, we build a CircleMover object that will try to move myCircle to a new place (10, 10), and print the coordinates again.
The output will be:

coordinates: x = 0 y = 0
coordinates: x = 10 y = 10

as we could guess from the test. But if we look inside the moveCircle‘s body, we notice something we didn’t expect:

public void moveCircle(Circle circle, int deltaX, int deltaY) {
    circle.setX(circle.getX() + deltaX);
    circle.setY(circle.getY() + deltaY);

    //code that tries to assign a new reference to circle
    circle = new Circle(0, 0);
    circle = null;
}

This means that the method changes the circle’s coordinates, but also tries to replace the reference to the object passed in, building a new Circle and then setting the parameter to null. Since the caller’s reference circle is unaffected, we conclude that the reference itself is passed by value to the method.
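The same behaviour, condensed into a self-contained sketch (Box and the method names here are mine, chosen just to make the two effects visible side by side):

```java
class Box {
    int value;
    Box(int value) { this.value = value; }
}

class PassByValueDemo {
    static void mutateAndReassign(Box b) {
        b.value = 42;    // visible to the caller: b and box point to the same object
        b = new Box(0);  // invisible: only the local copy of the reference changes
        b = null;        // same here: the caller's reference is untouched
    }

    public static void main(String[] args) {
        Box box = new Box(1);
        mutateAndReassign(box);
        System.out.println(box.value); // prints 42
    }
}
```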

I published a gist on github if you want to see and download the files and/or add tests. Contact me if you find errors or want to share some ideas.