When to use Pessimistic Locking

There are cases where we need to use a pessimistic locking strategy. While optimistic updates are an absolute minimum, we deploy pessimistic locking only as part of a carefully thought-out design. We use pessimistic locking strategies in two primary cases:

  • As a semaphore to ensure only a single process executes a certain block of code at a time
  • As a semaphore for the data itself

Let’s first make sure we agree on the term semaphore. In this context, we’re talking about a binary semaphore, or token. The token is either available or held by something, and it is required in order to proceed with a certain task. A request is made to obtain the token, and it is granted if available. When it is not available, the options are to block until it becomes available or to fail immediately.
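These token semantics can be sketched with Python's standard threading primitives, used here purely as an analogy for the database lock:

```python
import threading

token = threading.BoundedSemaphore(1)  # binary semaphore: exactly one token exists

token.acquire()                              # take the token; we may now proceed
second_try = token.acquire(blocking=False)   # token is held, so this fails fast
token.release()                              # return the token
third_try = token.acquire(blocking=False)    # token is available again, succeeds
token.release()
```

Here `second_try` is `False` and `third_try` is `True`: exactly the available/held, wait-or-fail behaviour described above.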

Using a semaphore to ensure that certain blocks of code are only run by a single process at a time is a great strategy for safely facilitating load balancing and failover. We can group related changes under a token, requiring any other process attempting to do the same thing to wait for our lock to be released. When the process terminates, whether successfully or abnormally, the token is returned by the corresponding commit or rollback.
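As a sketch, the pattern might look like the following. The `run_exclusively` name, the dedicated `jobs(name)` table, and the DB-API-style connection are all assumptions for illustration, and the exact `SELECT ... FOR UPDATE` syntax varies by database:

```python
def run_exclusively(conn, job_name, work):
    """Run `work` while holding a row lock on a dedicated jobs table.

    Assumes a `jobs(name)` table seeded with one row per job, and a
    database that supports SELECT ... FOR UPDATE. The lock lives for
    the life of the transaction: commit or rollback releases it.
    """
    cur = conn.cursor()
    # Block here until any other process running the same job releases its lock.
    cur.execute("SELECT name FROM jobs WHERE name = %s FOR UPDATE", (job_name,))
    try:
        work()
        conn.commit()    # successful completion returns the token
    except Exception:
        conn.rollback()  # abnormal termination also returns the token
        raise
```

Note that the function never releases the lock explicitly: it leans on the transaction boundary, so a crash mid-`work` still returns the token via rollback.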

When used as a semaphore for the data itself, a pessimistic lock lets us ensure that the data remains consistent for the duration of our block of code. Changes by other processes are prevented, so the actions we take are based on a consistent state. For example, we may want to make a collection of changes to foreign-key-related rows based on the parent, and need to ensure the parent data remains unchanged while we do. Pessimistically locking the parent row guarantees just that. And again, by relying on our existing infrastructure for managing the transaction, either successful completion or failure results in a commit or rollback that releases our lock.
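A minimal runnable sketch of this lifecycle, using SQLite as a stand-in (SQLite's `BEGIN IMMEDIATE` takes a database-wide write lock rather than a row-level lock, but the hold-work-release pattern is the same):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# isolation_level=None gives autocommit, so we control transactions explicitly.
writer = sqlite3.connect(path, isolation_level=None)
writer.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY, state TEXT)")
writer.execute("CREATE TABLE child (id INTEGER PRIMARY KEY, parent_id INTEGER)")
writer.execute("INSERT INTO parent VALUES (1, 'ready')")

writer.execute("BEGIN IMMEDIATE")  # take the write lock: parent cannot change under us
writer.execute("INSERT INTO child VALUES (1, 1)")  # changes based on a stable parent

# Meanwhile, another connection trying to write waits, then fails.
other = sqlite3.connect(path, isolation_level=None, timeout=0.1)
try:
    other.execute("BEGIN IMMEDIATE")
    blocked = False
except sqlite3.OperationalError:  # "database is locked"
    blocked = True

writer.execute("COMMIT")          # committing releases the lock...
other.execute("BEGIN IMMEDIATE")  # ...so this now succeeds
other.execute("COMMIT")
```

The second connection's failed `BEGIN IMMEDIATE` while the first holds the lock, and its success after the commit, is the semaphore-for-data behaviour in miniature.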

As mentioned, pessimistic locking does have the potential to cause performance problems. Most database implementations allow you either to not wait at all (thanks, Oracle!) or, at the very least, to wait only up to a maximum amount of time for the lock to be acquired. Knowing your database implementation and its ins and outs is crucial if you’ll be using this approach.
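The two options map naturally onto fail-fast versus bounded-wait lock acquisition. As a threading analogy (a stand-in for `FOR UPDATE NOWAIT`-style behaviour versus a lock wait timeout):

```python
import threading
import time

lock = threading.Lock()
lock.acquire()  # simulate another process holding the row lock

# NOWAIT-style: fail immediately instead of blocking.
got_nowait = lock.acquire(blocking=False)   # False, returns at once

# Bounded wait: give up after at most 0.2 seconds.
start = time.monotonic()
got_waited = lock.acquire(timeout=0.2)      # False, after roughly 0.2 s
waited = time.monotonic() - start
```

Either way the caller gets control back and can decide whether to retry, reschedule, or surface an error, rather than blocking indefinitely.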

Our new scheduler has been tested with hundreds of concurrent jobs running across dozens of instances without any collisions, deadlocks, or degradation in performance. Drop us a line if you’d like to learn more about how we did it.
