Tuesday, September 11, 2018

T-SQL Tuesday #106 - "Asynchronous" Triggers with Service Broker


Steve Jones has been kind enough to host this month's T-SQL Tuesday, with the topic of triggers.  Everyone reading this is probably well aware of the downsides of triggers, chief among them the increased potential for deadlocks and the difficulties they can pose when debugging.  A pattern we've successfully used to minimize these issues is to leverage Service Broker to move the business logic that would live in a traditional trigger outside the scope of the transaction.

In one prominent case there were a number of factors that made this a good solution.  We had a fairly active transactional table that was both wide and tall, and a business requirement that called for a large number of aggregates based on that table to be displayed on many of the most commonly hit pages of the application.  There was a tolerance for a slight delay in the calculation of those aggregates, on the order of a handful of seconds under normal load. The tables looked something like the below: TriggerTable contains the transactional data, and TriggerAggregates contains the rolled-up values by EntityID.
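Something like this, with everything beyond EntityID standing in as an illustration of the real (much wider) schema:

-- Transactional table; Amount is a stand-in for the real columns.
CREATE TABLE dbo.TriggerTable
(
    TriggerTableID BIGINT IDENTITY(1,1) PRIMARY KEY,
    EntityID INT NOT NULL,
    Amount DECIMAL(18,2) NOT NULL,
    CreatedDate DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);

-- Rolled-up values, one row per EntityID.
CREATE TABLE dbo.TriggerAggregates
(
    EntityID INT NOT NULL PRIMARY KEY,
    RecordCount INT NOT NULL,
    TotalAmount DECIMAL(18,2) NOT NULL,
    LastUpdated DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);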

The Service Broker queues don't need to be overly complex; we set up a pseudo-monolog much like the one Dave Wentzel describes here. I'm going to use JSON for the message rather than validated XML because I hate dealing with XML, and that seems like a good enough reason. First things first, we have to set up the database for Service Broker, and create a basic set of message type/contract/queues/services.
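A minimal sketch of that setup; the TriggerDemo database name and the //TriggerDemo/* object names are placeholders:

-- Enable Service Broker (assumes a database named TriggerDemo).
ALTER DATABASE TriggerDemo SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE;
GO

-- VALIDATION = NONE since we're sending JSON rather than validated XML.
CREATE MESSAGE TYPE [//TriggerDemo/AggregateRequest] VALIDATION = NONE;

CREATE CONTRACT [//TriggerDemo/AggregateContract]
    ([//TriggerDemo/AggregateRequest] SENT BY INITIATOR);

CREATE QUEUE dbo.AggregateSendQueue;
CREATE QUEUE dbo.AggregateReceiveQueue;

CREATE SERVICE [//TriggerDemo/AggregateSendService]
    ON QUEUE dbo.AggregateSendQueue ([//TriggerDemo/AggregateContract]);

CREATE SERVICE [//TriggerDemo/AggregateReceiveService]
    ON QUEUE dbo.AggregateReceiveQueue ([//TriggerDemo/AggregateContract]);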

Since we're modeling a monolog here, we'll create the stub activation proc on the sending queue and enable it for activation.
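A sketch of that stub; its only job is to drain the EndDialog and Error messages that come back to the initiator so completed conversations get cleaned up:

CREATE PROCEDURE dbo.AggregateSendQueue_Activation
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @ConversationHandle UNIQUEIDENTIFIER,
            @MessageType SYSNAME;

    WHILE 1 = 1
    BEGIN
        WAITFOR
        (
            RECEIVE TOP (1)
                @ConversationHandle = conversation_handle,
                @MessageType = message_type_name
            FROM dbo.AggregateSendQueue
        ), TIMEOUT 1000;

        IF @@ROWCOUNT = 0 BREAK;

        -- The target ends the conversation after processing; close our half.
        IF @MessageType IN (N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog',
                            N'http://schemas.microsoft.com/SQL/ServiceBroker/Error')
            END CONVERSATION @ConversationHandle;
    END;
END;
GO

ALTER QUEUE dbo.AggregateSendQueue
    WITH ACTIVATION
    (
        STATUS = ON,
        PROCEDURE_NAME = dbo.AggregateSendQueue_Activation,
        MAX_QUEUE_READERS = 1,
        EXECUTE AS OWNER
    );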

The trigger that sends the messages doesn't have to be very complicated. It just grabs a distinct list of EntityIDs, packages them up in JSON for transport, and sends the message on the queue. I prefer to limit the size of the messages I send, so I've included some looping behavior here to avoid sending through massive blobs in the event of a large data modification query.
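A sketch of the trigger, assuming the objects above; the batch size of 100 is arbitrary, and FOR JSON requires SQL Server 2016 or later:

CREATE TRIGGER dbo.TR_TriggerTable_Notify
ON dbo.TriggerTable
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @BatchSize INT = 100,
            @Message NVARCHAR(MAX),
            @ConversationHandle UNIQUEIDENTIFIER;

    -- Distinct list of EntityIDs touched by this statement.
    SELECT DISTINCT EntityID
    INTO #Entities
    FROM (SELECT EntityID FROM inserted
          UNION ALL
          SELECT EntityID FROM deleted) AS Touched;

    -- Send in batches of @BatchSize to avoid one massive blob
    -- on a large data modification query.
    WHILE EXISTS (SELECT 1 FROM #Entities)
    BEGIN
        SELECT @Message =
            (SELECT TOP (@BatchSize) EntityID
             FROM #Entities
             ORDER BY EntityID
             FOR JSON PATH);

        BEGIN DIALOG CONVERSATION @ConversationHandle
            FROM SERVICE [//TriggerDemo/AggregateSendService]
            TO SERVICE N'//TriggerDemo/AggregateReceiveService'
            ON CONTRACT [//TriggerDemo/AggregateContract]
            WITH ENCRYPTION = OFF;

        SEND ON CONVERSATION @ConversationHandle
            MESSAGE TYPE [//TriggerDemo/AggregateRequest] (@Message);

        -- Drop the batch we just sent.
        DELETE e
        FROM #Entities AS e
        WHERE e.EntityID IN (SELECT TOP (@BatchSize) EntityID
                             FROM #Entities
                             ORDER BY EntityID);
    END;
END;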

Since the aggregates in the example aren't particularly complex I've implemented them directly in the receive proc. In a production environment I'd put that logic in a separate proc so that it could be re-used for any other ad-hoc recalculations.
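A sketch of the receive proc, using the illustrative columns from the table definitions above; in production the MERGE below would live in that separate proc:

CREATE PROCEDURE dbo.AggregateReceiveQueue_Activation
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @ConversationHandle UNIQUEIDENTIFIER,
            @MessageType SYSNAME,
            @MessageBody NVARCHAR(MAX);

    WHILE 1 = 1
    BEGIN
        WAITFOR
        (
            RECEIVE TOP (1)
                @ConversationHandle = conversation_handle,
                @MessageType = message_type_name,
                @MessageBody = CAST(message_body AS NVARCHAR(MAX))
            FROM dbo.AggregateReceiveQueue
        ), TIMEOUT 1000;

        IF @@ROWCOUNT = 0 BREAK;

        IF @MessageType = N'//TriggerDemo/AggregateRequest'
        BEGIN
            -- Shred the JSON array back into a set of EntityIDs and
            -- recalculate the rollups for just those entities.
            ;WITH Entities AS
            (
                SELECT DISTINCT EntityID
                FROM OPENJSON(@MessageBody) WITH (EntityID INT '$.EntityID')
            ),
            Rolled AS
            (
                SELECT e.EntityID,
                       COUNT(t.TriggerTableID) AS RecordCount,
                       ISNULL(SUM(t.Amount), 0) AS TotalAmount
                FROM Entities AS e
                LEFT JOIN dbo.TriggerTable AS t ON t.EntityID = e.EntityID
                GROUP BY e.EntityID
            )
            MERGE dbo.TriggerAggregates AS tgt
            USING Rolled AS src ON tgt.EntityID = src.EntityID
            WHEN MATCHED THEN
                UPDATE SET RecordCount = src.RecordCount,
                           TotalAmount = src.TotalAmount,
                           LastUpdated = SYSUTCDATETIME()
            WHEN NOT MATCHED THEN
                INSERT (EntityID, RecordCount, TotalAmount, LastUpdated)
                VALUES (src.EntityID, src.RecordCount, src.TotalAmount, SYSUTCDATETIME());
        END;

        -- One message per conversation in this sketch, so close our side;
        -- the stub proc on the sending queue drains the resulting EndDialog.
        END CONVERSATION @ConversationHandle;
    END;
END;
GO

ALTER QUEUE dbo.AggregateReceiveQueue
    WITH ACTIVATION
    (
        STATUS = ON,
        PROCEDURE_NAME = dbo.AggregateReceiveQueue_Activation,
        MAX_QUEUE_READERS = 1,
        EXECUTE AS OWNER
    );

Note that the MERGE recalculates the aggregates from scratch for each EntityID in the message rather than applying deltas, so a deleted entity's rollup correctly drops to zero instead of going stale.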

Once this is all set up, you can test that it's working by running some data modification queries against TriggerTable and then checking the contents of TriggerAggregates. I've included brief WAITFORs to allow for the asynchronous processing.
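For example:

-- Modify some data, give activation a few seconds to drain the queue,
-- then check the rollups (Amount values match the sketch schema above).
INSERT INTO dbo.TriggerTable (EntityID, Amount)
VALUES (1, 10.00), (1, 15.50), (2, 99.99);

WAITFOR DELAY '00:00:05';
SELECT * FROM dbo.TriggerAggregates;

DELETE FROM dbo.TriggerTable WHERE EntityID = 2;

WAITFOR DELAY '00:00:05';
SELECT * FROM dbo.TriggerAggregates;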

While this is definitely overkill from a complexity standpoint for a trivial case like this example, the pattern can be used to drive much of the functionality that traditional triggers handle, making it asynchronous and allowing incoming transactional data to flow unimpeded.
