Tuesday, September 11, 2018

T-SQL Tuesday #106 - "Asynchronous" Triggers with Service Broker


Steve Jones has been kind enough to host this month's T-SQL Tuesday, with the topic of triggers.  Everyone reading this is probably well aware of the downsides of triggers, chief among them the increased potential for deadlocks and the difficulties they can pose when debugging.  A pattern we've successfully used to minimize these issues is to leverage Service Broker to move the business logic that would live in a traditional trigger outside the scope of the transaction.

In one prominent case there were a number of factors that led this to be a good solution.  We had a fairly active transactional table that was both wide and tall, and a business requirement that called for a large number of aggregates based on that table to be displayed to users of the application on many of the most commonly hit pages of the system.  There was a tolerance for a slight delay in the calculation of those aggregates, on the order of a handful of seconds under normal load. The tables looked something like the below: TriggerTable contains the transactional data, and TriggerAggregates contains the rolled-up values by EntityID.
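The original table definitions aren't reproduced here, so the following is a minimal sketch; the column names and types (Amount, CreatedDate, etc.) are assumptions for illustration.

```sql
-- Hypothetical shapes for the two tables in this example.
CREATE TABLE dbo.TriggerTable
(
    TriggerTableID BIGINT IDENTITY(1,1) PRIMARY KEY,
    EntityID       INT           NOT NULL,
    Amount         DECIMAL(18,2) NOT NULL,
    CreatedDate    DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME()
);

CREATE TABLE dbo.TriggerAggregates
(
    EntityID    INT           NOT NULL PRIMARY KEY,
    TotalAmount DECIMAL(18,2) NOT NULL,
    RecordCount BIGINT        NOT NULL,
    LastUpdated DATETIME2     NOT NULL
);
```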

The Service Broker queues don't need to be overly complex; we set up a pseudo-monolog much like the one Dave Wentzel describes here. I'm going to use JSON for the messages rather than validated XML because I hate dealing with XML, and that seems like a good enough reason. First things first, we have to enable Service Broker on the database and create a basic set of message type/contract/queues/services.
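A sketch of that setup might look like the below; the service, contract, and queue names are assumptions, not the original post's.

```sql
-- Enable Service Broker on the database (kills open transactions).
ALTER DATABASE CURRENT SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE;

-- JSON payload, so no XML validation on the message type.
CREATE MESSAGE TYPE [//Trigger/AggregateRequest] VALIDATION = NONE;

CREATE CONTRACT [//Trigger/AggregateContract]
    ([//Trigger/AggregateRequest] SENT BY INITIATOR);

CREATE QUEUE dbo.TriggerSendQueue;
CREATE QUEUE dbo.TriggerReceiveQueue;

CREATE SERVICE [//Trigger/SendService]
    ON QUEUE dbo.TriggerSendQueue;
CREATE SERVICE [//Trigger/ReceiveService]
    ON QUEUE dbo.TriggerReceiveQueue ([//Trigger/AggregateContract]);
```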

Since we're modeling a monolog here, we'll create the stub activation proc on the sending queue, and enable it for activation.
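In the monolog pattern the sending queue only ever sees system messages (EndDialog or Error), so the stub just drains them and closes its side of each conversation. A sketch, using the queue names assumed above:

```sql
CREATE PROCEDURE dbo.TriggerSendQueue_Activation
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @handle UNIQUEIDENTIFIER;

    WHILE 1 = 1
    BEGIN
        WAITFOR (
            RECEIVE TOP (1) @handle = conversation_handle
            FROM dbo.TriggerSendQueue
        ), TIMEOUT 1000;

        IF @@ROWCOUNT = 0 BREAK;

        -- EndDialog or Error: either way we're done with this conversation.
        END CONVERSATION @handle;
    END
END
GO

ALTER QUEUE dbo.TriggerSendQueue
    WITH ACTIVATION (
        STATUS = ON,
        PROCEDURE_NAME = dbo.TriggerSendQueue_Activation,
        MAX_QUEUE_READERS = 1,
        EXECUTE AS OWNER
    );
```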

The trigger that sends the messages doesn't have to be very complicated. It just grabs a distinct list of EntityIDs, packages them up in JSON for transport, and sends the message on the queue. I prefer to limit the size of the messages I send, so I've included some looping behavior here to avoid sending through massive blobs in the event of a large data modification query.
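A sketch of such a trigger is below; the batch size of 100 and the column names are assumptions. Note that it reads from both `inserted` and `deleted` so that inserts, updates, and deletes all flag the affected entities.

```sql
CREATE TRIGGER dbo.trg_TriggerTable_SendAggregateMessage
ON dbo.TriggerTable
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Distinct list of affected entities from both pseudo-tables.
    SELECT DISTINCT EntityID
    INTO #Entities
    FROM (SELECT EntityID FROM inserted
          UNION ALL
          SELECT EntityID FROM deleted) AS changed;

    IF NOT EXISTS (SELECT 1 FROM #Entities) RETURN;

    DECLARE @handle    UNIQUEIDENTIFIER,
            @payload   NVARCHAR(MAX),
            @batchSize INT = 100;  -- assumed cap on EntityIDs per message

    DECLARE @batch TABLE (EntityID INT PRIMARY KEY);

    WHILE 1 = 1
    BEGIN
        DELETE @batch;

        -- Peel off the next batch of EntityIDs.
        DELETE TOP (@batchSize) e
        OUTPUT deleted.EntityID INTO @batch
        FROM #Entities AS e;

        IF NOT EXISTS (SELECT 1 FROM @batch) BREAK;

        -- Package the batch as a JSON array for transport.
        SELECT @payload = (SELECT EntityID FROM @batch FOR JSON PATH);

        BEGIN DIALOG CONVERSATION @handle
            FROM SERVICE [//Trigger/SendService]
            TO SERVICE   N'//Trigger/ReceiveService'
            ON CONTRACT  [//Trigger/AggregateContract]
            WITH ENCRYPTION = OFF;

        SEND ON CONVERSATION @handle
            MESSAGE TYPE [//Trigger/AggregateRequest] (@payload);
    END
END
```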

Since the aggregates in the example aren't particularly complex I've implemented them directly in the receive proc. In a production environment I'd put that logic in a separate proc so that it could be re-used for any other ad-hoc recalculations.
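A sketch of the receive proc follows, assuming the tables and names from the earlier blocks. It shreds the JSON with OPENJSON, recalculates the aggregates for just the affected entities, and ends the conversation so the EndDialog flows back to the sending queue's stub proc.

```sql
CREATE PROCEDURE dbo.TriggerReceiveQueue_Activation
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @handle       UNIQUEIDENTIFIER,
            @message_type SYSNAME,
            @payload      NVARCHAR(MAX);

    WHILE 1 = 1
    BEGIN
        WAITFOR (
            RECEIVE TOP (1)
                @handle       = conversation_handle,
                @message_type = message_type_name,
                @payload      = CAST(message_body AS NVARCHAR(MAX))
            FROM dbo.TriggerReceiveQueue
        ), TIMEOUT 1000;

        IF @@ROWCOUNT = 0 BREAK;

        IF @message_type = N'//Trigger/AggregateRequest'
        BEGIN
            -- Shred the JSON array and recalculate aggregates for
            -- only the EntityIDs named in the message.
            WITH Entities AS (
                SELECT DISTINCT EntityID
                FROM OPENJSON(@payload) WITH (EntityID INT '$.EntityID')
            )
            MERGE dbo.TriggerAggregates AS tgt
            USING (
                SELECT e.EntityID,
                       ISNULL(SUM(t.Amount), 0) AS TotalAmount,
                       COUNT(t.TriggerTableID)  AS RecordCount
                FROM Entities e
                LEFT JOIN dbo.TriggerTable t ON t.EntityID = e.EntityID
                GROUP BY e.EntityID
            ) AS src ON tgt.EntityID = src.EntityID
            WHEN MATCHED THEN UPDATE SET
                TotalAmount = src.TotalAmount,
                RecordCount = src.RecordCount,
                LastUpdated = SYSUTCDATETIME()
            WHEN NOT MATCHED THEN INSERT
                (EntityID, TotalAmount, RecordCount, LastUpdated)
                VALUES (src.EntityID, src.TotalAmount,
                        src.RecordCount, SYSUTCDATETIME());
        END

        END CONVERSATION @handle;
    END
END
GO

ALTER QUEUE dbo.TriggerReceiveQueue
    WITH ACTIVATION (
        STATUS = ON,
        PROCEDURE_NAME = dbo.TriggerReceiveQueue_Activation,
        MAX_QUEUE_READERS = 1,
        EXECUTE AS OWNER
    );
```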

Once this is all set up you can test that it's working by running some data modification queries against TriggerTable, and then checking the contents of TriggerAggregates. I've included brief WAITFORs to allow for the asynchronous processing.
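A smoke test along these lines (the values and the five-second delay are arbitrary) might look like:

```sql
INSERT dbo.TriggerTable (EntityID, Amount)
VALUES (1, 10.00), (1, 15.50), (2, 99.99);

WAITFOR DELAY '00:00:05';  -- give activation time to process
SELECT * FROM dbo.TriggerAggregates;

DELETE dbo.TriggerTable WHERE EntityID = 2;

WAITFOR DELAY '00:00:05';
SELECT * FROM dbo.TriggerAggregates;  -- EntityID 2 should now show zeroes
```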

While this is definitely overkill from a complexity standpoint in a trivial case such as this example, the pattern can be used to make lots of functionality that would traditionally be driven by triggers asynchronous, allowing incoming transactional data to flow unimpeded.

Tuesday, April 10, 2018

TSQL2SDAY #101 - The tools I rely on


I promised myself that I would start blogging about SQL this year, and there's no better place to start than T-SQL Tuesday!  Unfortunately I missed that this month's submissions are due on GMT, so I apologize for the rushed nature of this post.

This month's topic is "My Essential SQL Server Tools" and is being hosted by Jens Vestergard.

SQL Prompt

SQL Prompt is the most crucial tool that I use on a daily basis.  I have become almost wholly dependent upon the autocomplete, which is leaps and bounds ahead of IntelliSense.  We have a number of databases with tens or hundreds of thousands of objects, which make IntelliSense crawl and eventually crash SSMS (thanks, 32-bit memory limit).  SQL Prompt's autocomplete doesn't work well in those databases either, but it does allow you to exclude them based on a wildcard match, which allows SSMS to proceed uncrashed.

SQL Prompt also contains a feature which allows you to define and apply code formatting templates.  The most useful application for this is formatting queries generated by the application, or by a fellow developer whose code doesn't conform to your specific OCD idiosyncrasies.  I find this very helpful to increase readability and thus allow for easier debugging.  It has a pretty powerful set of options for you to fiddle with to get your preferred code style exactly right.

A final essential feature is the execution warnings, which force you to go through a dialog box when executing a DELETE or UPDATE statement without a WHERE clause, as well as when creating or altering a procedure that includes such a statement.  We've actually mandated that any employee with write access to a production database have SQL Prompt installed and this feature enabled.  It's definitely less annoying to have to click "OK" a few times a week than it would be to restore a production database from backup while client support is breathing down your neck.

It also offers many quality-of-life improvements such as tab coloring, tab history, and code snippets that are definitely beneficial, but don't quite make it to the "essential to my day" standard.

SentryOne Plan Explorer

In the category of free tools that I use (almost) every day, there is one standout.  It's a frequent occurrence that I'm presented with a query and the question, "Why is this query slow?"  Professional courtesy prevents me from responding, "So many reasons; that query is an abomination, please make the hurting stop," so I turn to SentryOne Plan Explorer for a less emotional answer.  It offers a simple, intuitive interface for digging into execution plans and makes the most expensive operations easy to find.


GitHub & SourceTree

I'm a big believer in collaborative source control; it allows our team to make better design decisions and catch many bugs before code goes to QA.  GitHub pull requests offer an excellent platform for this sort of collaboration: we can easily comment on specific lines of code or on larger architectural decisions.  It also integrates nicely with our project management software, allowing us to easily see WHY we wrote the code that we did.  The power of Git is matched only by its learning curve; the command line can be more than a little bit intimidating, so SourceTree provides much-needed training wheels as one acclimates to the complexities of a DVCS.