Connecting to SQL Server from Mule ESB

I normally blog about SQL Server, but I thought I’d share my experience of using SQL Server from Mule ESB. If you haven’t used Mule before, it’s an open-source ESB which also has a paid-for Enterprise version. I’ve been using it for almost three years now and have been really impressed with how quickly we can create quite complex applications. It’s very fast, easy to use and free if you use the Community version; some experience with Java also helps. I tend to use Mule for applications that process web requests or integrate with existing systems, and SQL Server Integration Services (SSIS) for moving data around.

I’ve been working on a Mule ESB application which needs to log millions of requests a day, and with the recent advancements in SQL Server, it’s now the obvious choice for storing almost any type of data. You can also run SQL Server 2017 on Linux, which will please a lot of Mule users.

Mule previously had a MSSQL connector but that has been deprecated and replaced with the Generic Database Connector, which works with most JDBC drivers. To get started, we’ll need to download the latest driver from Microsoft and add it to our project. We can then configure the database using a standard connection string.

Download and add the JDBC driver

At the time of writing, the latest is JDBC 6.2, which you’ll need if you want to use SQL Server 2017.

Download Microsoft JDBC Driver for SQL Server

Add the driver to your Mule project using Add External JARs… and select either mssql-jdbc-6.2.2.jre7.jar or mssql-jdbc-6.2.2.jre8.jar, depending on your JRE.

Note: There isn’t a JRE 9 driver; nor does AnyPoint Studio work with Java 9 at the moment.


Configuring the Database Connector

The database connector is surprisingly easy to configure. We simply add a standard connection string and reference the JDBC driver class; the process is the same regardless of which JDBC driver you use.


Here’s an example which connects to a local instance containing a database called TestDatabase. I’ve added the port, but this isn’t required unless it has been changed from the default of 1433.
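As the original screenshot isn’t reproduced here, a minimal sketch of the generic database configuration might look like this (connector name and credentials are illustrative):

```xml
<!-- Illustrative config: a local instance with a database called TestDatabase -->
<db:generic-config name="MSSQL_Configuration"
    url="jdbc:sqlserver://localhost:1433;databaseName=TestDatabase;user=TestUser;password=reallySecure!"
    driverClassName="com.microsoft.sqlserver.jdbc.SQLServerDriver"
    doc:name="Generic Database Configuration" />
```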


The above is fine for testing, but I would suggest using parameters stored in a central configuration file to make your life easier. This will enable you to set the environment automatically using a local config file, or to switch manually between Dev and Production by commenting each section out. I would also advocate setting an application name, as this will help you or the database team to identify your application, rather than it simply showing as a JDBC program.
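As a sketch, the parameterised version might look like this, with the ${…} placeholders resolved from a properties file and an applicationName added to the connection string (placeholder and application names are illustrative):

```xml
<!-- Illustrative config using property placeholders and an application name -->
<db:generic-config name="MSSQL_Configuration"
    url="jdbc:sqlserver://${mssql.server}:${mssql.port};databaseName=${mssql.database};user=${mssql.user};password=${mssql.password};applicationName=MuleLoggingApp"
    driverClassName="com.microsoft.sqlserver.jdbc.SQLServerDriver"
    doc:name="Generic Database Configuration" />
```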


Once this is set, you can store the following parameters in another Mule config file; I tend to use global_properties.xml.

 <!-- Test Database Settings -->
 <global-property name="mssql.server" value="" doc:name="SQL Server Instance" />
 <global-property name="mssql.port" value="1433" doc:name="SQL Server Port" />
 <global-property name="mssql.database" value="TestDatabase" doc:name="Database Name" />
 <global-property name="mssql.user" value="TestUser" doc:name="Username" />
 <global-property name="mssql.password" value="reallySecure!" doc:name="Password" />
 <!-- Production Database Settings 
 <global-property name="mssql.server" value="" doc:name="SQL Server Instance" />
 <global-property name="mssql.port" value="1433" doc:name="SQL Server Port" />
 <global-property name="mssql.database" value="LiveDB" doc:name="Database Name" />
 <global-property name="mssql.user" value="MuleApp" doc:name="Username" />
 <global-property name="mssql.password" value="superSecure!" doc:name="Password" />
 -->

Other Considerations

  • By default, the maximum number of database connections is 5, unlike .NET where it’s 100. If you think you’ll need more, make sure you set it higher.
  • If you’ll be using the connection frequently, I would strongly recommend setting a minimum connection pool size to lower the overhead on both Mule and SQL Server.
  • If you’re using the standard port then you don’t need to specify it, but it doesn’t hurt and future-proofs your application.
  • Only bother with a reconnection strategy if you really need the data. If it’s just logging, consider throwing it away rather than adding load to a server which may already be struggling.
  • If you’re load balancing the connections, I would also suggest setting the Load Balance Timeout, otherwise the connections will be sticky and won’t automatically rebalance after a server restart etc.
  • Again, I would set an application name to help when diagnosing issues.
  • Consider enabling delayed durability when saving logging data, as it will reduce the time spent in SQL Server and free up database connections more quickly. I would also specify it for the specific stored procedure in use, rather than forcing it for the entire database. With delayed durability comes the possibility of data loss, so make sure you understand the risks.
  • Consider using memory optimized tables for very busy applications, then process the data using a separate process, which could be called from Mule or via the SQL Server Agent. Again, if the data isn’t important, I would opt for a SCHEMA_ONLY table, which is faster, but you’ll lose the data should SQL Server restart. If this isn’t an option, you can use the default of SCHEMA_AND_DATA, which will commit the data to disk.
  • Use stored procedures or parameterised queries when communicating with SQL Server, as this will enable SQL Server to reuse execution plans.
  • If you decide to use memory-optimized tables, consider natively compiled stored procedures, which will reduce the overhead of the queries and improve performance.
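As a sketch of the delayed durability suggestion above (database, table and procedure names are hypothetical):

```sql
-- Allow delayed durability at the database level, then opt in per commit
ALTER DATABASE TestDatabase SET DELAYED_DURABILITY = ALLOWED;
GO

-- Logging procedure that requests a delayed durable commit
CREATE PROCEDURE dbo.LogRequest (@Url NVARCHAR(400))
AS
BEGIN
    BEGIN TRANSACTION;
    INSERT INTO dbo.RequestLog (Url, LoggedAt)
    VALUES (@Url, SYSUTCDATETIME());
    COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);
END;
```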

Quick Test

I’ve created an HTTP service listening on port 8081 that connects to the database, retrieves the server and service name, then returns them as JSON.
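The underlying query is along these lines (a sketch; the original flow isn’t shown here):

```sql
-- Retrieve the server and service name for the JSON response
SELECT @@SERVERNAME AS [server], @@SERVICENAME AS [service];
```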



Here are the example results in JSON.


Final thoughts

Microsoft have made the cheaper (or free) versions of SQL Server very powerful by allowing us to use features which were previously Enterprise-only. For me, it does everything I need from a database: in-memory tables that can handle thousands of updates a second, through to large clustered columnstore tables that can handle the real-time analytical queries that drive dashboards.

It would be nice to see the jTDS driver get more attention, as it’s much faster than ODBC, but at the time of writing it doesn’t work with the latest versions of SQL Server and doesn’t have much documentation or support. Maybe Microsoft could release and maintain the jTDS driver in parallel to JDBC?

Utilizing the cache scope could also prove quite powerful as it will reduce the resource requirements of SQL Server, whilst improving the performance of your Mule application. I’ll try to blog on this soon but feel free to pester me if I haven’t.

Further reading

Using the JDBC driver | Microsoft

Building the connection URL | Microsoft

Database Connector | MuleSoft

Cache Scope | MuleSoft

Defining Durability for Memory-Optimized Objects | Microsoft

Control Transaction Durability | Microsoft

Natively Compiled Stored Procedures | Microsoft


Tracking replication throughput

It’s a right laugh when replication goes wrong; if you haven’t been there and you use replication heavily, you’re in for the night (or early morning) of your life. Like any issue, the cause of replication problems could be one of many, but checking what commands are being passed through the distributor is always worthwhile, as an inefficient query could be updating the same data over and over, putting unnecessary strain on the distributor, the infrastructure or the subscribers.

The following queries check the transaction counts per minute, with the second query breaking it down by article.
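The original queries aren’t reproduced here, but sketches along these lines will give you transaction counts per minute and a per-article breakdown from the distribution database (adjust the distribution database name to suit):

```sql
-- Transactions per minute passing through the distributor
SELECT DATEADD(MINUTE, DATEDIFF(MINUTE, 0, t.entry_time), 0) AS [minute],
       COUNT(*) AS [transactions]
FROM distribution.dbo.MSrepl_transactions AS t WITH (NOLOCK)
GROUP BY DATEADD(MINUTE, DATEDIFF(MINUTE, 0, t.entry_time), 0)
ORDER BY [minute] DESC;

-- Pending commands broken down by article
SELECT a.article, COUNT(*) AS [commands]
FROM distribution.dbo.MSrepl_commands AS c WITH (NOLOCK)
INNER JOIN distribution.dbo.MSarticles AS a
    ON a.article_id = c.article_id
GROUP BY a.article
ORDER BY [commands] DESC;
```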

Indexes for SQL Server replication

Distribution databases need to be administered and maintained like any other database, which includes indexes. Microsoft have used index hints in several stored procedures, which limits us slightly, but these indexes have worked well for me on a number of systems and continue to do so.

Indexes for transactional replication

Indexes for merge replication

Comparing SQL Server configurations with Aireforge Studio

For ease of maintenance and performance reasons, we always aim to standardise our SQL Server estates (some of which contain over 100 instances). Everything is synchronised within its specific workload pool, so all search servers are identical, all reporting servers are identical, and so on. Not only do we standardise the hardware and the VMs, but we ensure that every setting bar the IP and name is identical. This is easy to achieve if you’ve recently built your estate from clones, but after a while the server configuration can drift, which could impact performance and / or stability.

When you have an issue on standardised kit, your first question is usually what’s changed, or how is that server different from the others. With over 100 instances to check, this can take a while to diagnose, but you usually find that a setting has changed or a job has been disabled. If you don’t find anything, then you still have an answer and can focus your time elsewhere. You can get a long way with scripts, either dumping to a central table or using Central Management Servers and / or policy-based management, but we found that these approaches still didn’t cut it.

Aireforge Studio instantly compares a large number of settings and uses some logic to ignore false positives, such as differences with server names within jobs. The list of what it compares increases with every release but here’s an overview.

What does Aireforge Studio compare?

  • Naming (server, instance etc.)
  • Version information (version, edition, service packs, cumulative updates etc.)
  • Instance configuration settings (memory, fill factor, max worker threads, recovery interval, CLR settings, compression settings etc.)
  • Instance properties (clustering info, edition, service pack & patch level information etc.)
  • Database settings (auto shrink, snapshot isolation, page verify option, CDC, CT, compatibility level, recovery model etc.)
  • Database file settings (auto growth, initial size, naming, state)
  • Users
  • Registered servers
  • Assemblies (creation / modification date, CLR name etc.)
  • Service information (account, start up type (manual or automatic))
  • Registry information (Port, working directory etc.)
  • Server (DDL) triggers
  • Endpoint information
  • SQL Server Agent Jobs


Examples of what you can use it for.

  • Full comparisons or partial comparisons; just checking jobs, versions, users etc.
  • Synchronising the estate. Keep all of your servers synchronised, taking some of the guesswork out of service issues and misbehaving servers.
  • Auditing the estate. You can easily compare every server (limited to 500 but contact the team if you need more).
  • Testing / Performance Tuning. Create a backup of the configuration before making changes during testing. Store the output with your test results so you can look back and recreate the exact configuration during the tests.
  • Compare single output files to check for differences over time.
  • Share configurations with colleagues or use saved configurations as a reference.
  • Create example files to distribute with your software to aid installations.
  • Use Aireforge Studio to remotely check customer configurations, reducing support costs and turnaround times.


If you would like to download Aireforge Studio, you can do so using the following link.

Download Aireforge Studio


Testing WMI for Redgate SQL Monitor

I tend to use Redgate SQL Monitor when diagnosing hard-to-find issues or when helping teams to better understand their estates. Sometimes we run into issues with the network during the installation, which is usually down to the Windows Management Instrumentation (WMI) ports. SQL Monitor uses WMI calls to get all the non-SQL Server information from Windows, such as disk, network and CPU usage.

It’s not completely obvious when connectivity is the issue, and testing via the application can be difficult due to connection timeouts and retries etc. Here’s where WMI Tester comes in handy. It’s a free download that doesn’t require an installation or a separate runtime.


  • Download the zip from
  • Unpack the zip and run WMITest.exe or read PaesslerWMITester.pdf for more information.
  • For testing, I tend to use an admin account to make sure WMI works, then try the domain account used for SQL Monitor to test that the privileges are correct.
  • If it works, you’ll get some results from the default command. If not, you’ll get an error.

If you’re using an admin account and it isn’t working then check the following:

  • Check that the services are started on the remote machine
    • Remote Procedure Call (RPC) should be running and set to start automatically.
  • Check that the ports are open on the local firewall and any in-between.
  • Consider using a fixed port for WMI.
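Outside of WMI Tester, a quick way to exercise the same remote WMI path is a one-liner from PowerShell (the host name below is a placeholder; supply the SQL Monitor account when prompted):

```powershell
# Query the remote machine over WMI, prompting for credentials
Get-WmiObject -Class Win32_OperatingSystem -ComputerName SQLHOST01 -Credential (Get-Credential)
```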


Useful links

Redgate help covering port numbers.

Setting Up a Fixed Port for WMI


SQL Server on Linux and transactional replication

Many companies use transactional replication to scale out their database reads; these estates can get quite large and very expensive. Costs can be reduced by moving to SQL Server Web Edition, which is extremely cheap for what you get, but after that you only really save money by reducing the server count.

As SQL Server 2017 now runs on Linux, we have another opportunity to reduce costs by removing the Windows Server licence completely and switching to Linux (I’m using Ubuntu).

Implementing (GUI)

Microsoft have stated that transactional replication isn’t supported on Linux, and we’re not sure if they intend to support it in the future. If you try to add the server to a publication via the GUI, you get the following message. So you can’t use the GUI, and it also means that you can’t use pull subscriptions, as the necessary files won’t be there.


Implementing (Scripts)

So the GUI doesn’t work and you can’t use pull subscriptions but there’s no reason why you can’t use scripts to configure a push subscription.

EXEC sys.sp_addsubscription
    @publication = 'xTEST',
    @subscriber = 'SQLOL-01',
    @destination_db = 'xDB',
    @subscription_type = N'push',
    @status = N'subscribed';

EXEC sp_addpushsubscription_agent
    @publication = 'xTEST',
    @subscriber = 'SQLOL-01',
    @subscriber_db = 'xDB',
    @subscriber_security_mode = 0,
    @subscriber_login = 'repl_user',
    @subscriber_password = 'grwX3bNuPyYf';

Ideally we’d use integrated security for the replication agent, which could be possible once AD for SQL Server on Linux has been configured, but that’s another test for another day.

Once these scripts have returned, check that the replication agents are running and the new Linux subscriber is either waiting for the initial snapshot or initialising if you have immediate_sync set to true. 

Example Savings

Using the Azure calculator as a guide, I’ve selected a D5 v2 (16 cores, 56GB RAM) for one month. In terms of sweating our assets I’d prefer 72GB of RAM, but you can’t win them all. Anyway, based on this configuration, and ignoring SQL Server licensing costs for now, moving to Linux would reduce our VM costs by 43%!

Windows vs Linux

Windows OS Only       $1,382.35   £1,030.28
Linux OS Only           $785.66     £585.56
Savings from Linux      $596.69     £444.72

Slightly off topic, but as I’ve mentioned Web edition already: moving to SQL Server Web Edition for the same VM would reduce SQL Server licence costs by 43%, which means that you could have 3 x SQL Server on Linux (Web) servers for the same price as 1 x SQL Server Standard on Windows.

SQL Standard vs Web

Windows SQL Server Standard   $2,572.75   £1,917.50
Windows SQL Server Web        $1,477.58   £1,101.26
Savings from Web              $1,095.17     £816.24



Replication wish list

I’ve worked on a few systems that use replication, and the same issues tend to crop up. There’s a discussion on SQL Server Central about what needs to be improved, so I thought I’d share my experiences.

  • Better default indexes (See blog)
  • Built in canary tables for monitoring latency.
  • I created some scripts I call REPLMon for this and published it on GitHub.
  • QueryStore for distribution databases
  • Fix the silent fail issue where replication can timeout and fail to create tables or insert \ delete SPs.
  • Increase the compression limit for snapshots, maybe compress each article rather than the whole snapshot.
  • Reduce \ eliminate locking when creating snapshot.
  • Why is the compatibility level of the distribution database 2005?
  • Ability to create pull subscription jobs on Web edition (here’s my scripts)
  • Ability to synchronise the indexes between the publisher and the subscriber. So new indexes and changes are automatically replicated out.
    • I intend to add this to REPLMon at some point.
  • Improved logging and diagnostics for errors.
    • Better ability to see command information.
  • Ability to analyse the transactions per object (possible with scripts) to help identify tables that are being updated too often and causing unnecessary strain.
  • Ability to optimise the order of the snapshot (inspect the dependency and reorder using sp_changearticle and sp_changemergearticle).
    • Through Aireforge we’ve written, but not released, something that does this. If people are interested, let us know and we’ll create an SSMS plug-in.


Scripting pull subscriptions (SQL Server replication)

I’ve created some simple scripts for creating subscriptions and their pull jobs on single or multiple subscribers. Using scripts makes life much easier, especially if you’re bringing in multiple servers and / or the subscribers are running SQL Server Web edition, as the jobs don’t get created automatically via the GUI, which can be confusing.

There are various tools for running scripts across multiple servers; my favourites are SQL Server Central Management Servers or, if you want something more advanced and easier to use, SQL Multi Script from Redgate.

You can also use Aireforge Studio to compare the jobs afterwards, making sure they were created properly and are running. It’s also useful for flagging up other configuration inconsistencies such as missing database objects, users or database and instance configuration settings.

Create local subscriptions and generate publisher scripts

Create the local subscriptions and generate the script for adding the subscriber to the publisher (this will need to be copied out and run against the publisher database).
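The full scripts aren’t shown here, but the core calls are along these lines, with illustrative server and database names; the first runs on the subscriber and the second is the generated script for the publisher:

```sql
-- On the subscriber: register the pull subscription
EXEC sp_addpullsubscription
    @publisher = N'PUB-01',
    @publisher_db = N'xDB',
    @publication = N'xTEST',
    @independent_agent = N'true';

-- On the publisher: register the subscriber as pull
EXEC sp_addsubscription
    @publication = N'xTEST',
    @subscriber = N'SUB-01',
    @destination_db = N'xDB',
    @subscription_type = N'pull',
    @status = N'subscribed';
```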

Create and start the replication agent jobs

Remember to locate and set the location of DISTRIB.EXE. I’ve also included an option to set an alternative snapshot location, which is very useful for remote servers, especially if you distribute the snapshot files using DFS-R (search here until I blog about it).
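For reference, the agent job step ends up invoking DISTRIB.EXE directly; a sketch of what that looks like, with hypothetical path, server names and job name:

```sql
-- Illustrative CmdExec job step that runs the Distribution Agent
DECLARE @cmd NVARCHAR(MAX) =
      N'"C:\Program Files\Microsoft SQL Server\130\COM\DISTRIB.EXE" '
    + N'-Publisher PUB-01 -PublisherDB xDB -Publication xTEST '
    + N'-Subscriber SUB-01 -SubscriberDB xDB -SubscriptionType 1 '
    + N'-Distributor DIST-01 -DistributorSecurityMode 1';

EXEC msdb.dbo.sp_add_jobstep
    @job_name = N'REPL-xTEST-Pull',
    @step_name = N'Run Distribution Agent',
    @subsystem = N'CmdExec',
    @command = @cmd;
```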

Now check that the agent is working properly and either waiting for the initial snapshot to be created or propagating out any active snapshots.

Please let me know if you spot any issues or would like to improve the scripts.

Simple things SQL Server is missing

I’m writing an article covering common SQL Server configuration issues and as I write about how Microsoft have addressed some of these in SQL Server 2016, I started thinking about what they’ve missed or what they should include in the next release, be it SQL Server 2018 or version 14.

RCSI as the default isolation level

Read Committed Snapshot Isolation should really be the default for all new SQL Server databases, and Microsoft should encourage this by changing the default database option for Is Read Committed Snapshot On to true. Upgraded databases would continue to have this set to false, and new databases would still have the option to use the old behaviour if required.
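Enabling it per database is a one-liner (hypothetical database name; the WITH ROLLBACK IMMEDIATE clause kicks out blocking sessions so the change can complete):

```sql
ALTER DATABASE TestDatabase
SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
```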

Built-in maintenance

In the old days we had our own database maintenance scripts, and now most people just use Ola Hallengren’s scripts (because they’re awesome), but something like these should have been shipping with SQL Server for years. Even if it’s a maintenance plan (I’d rather not), SQL Server should ship with cross-database jobs that handle index maintenance and integrity checks automatically, either using a default schedule or one configured during installation.

Increase Cost Threshold for Parallelism

The cost threshold for parallelism should be updated to reflect modern hardware. I’m not sure how it came to be 5, but I understand it’s a number based on late nineties hardware. I generally set this to a minimum of 30, sometimes much higher but 30 would be a much better default than 5. A better solution would be to change this figure to represent seconds or milliseconds as it once did.
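For reference, bumping the default is straightforward (30 used here, as per the text):

```sql
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'cost threshold for parallelism', 30;
RECONFIGURE;
```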

Format drives during install

Most SQL Server installations have dedicated drives for the data, logs and tempdb, but they’re not always formatted correctly, with many systems still using an allocation unit size of 4KB rather than 64KB. It would be useful if the installer could format drives, but even a simple check of the allocation unit size and a warning would be helpful.

Server memory as a percentage

I work to the rule that a SQL Server instance should use 90% of the available RAM, unless it’s a small system without much to play with. On shared servers you may reduce this percentage further, but in most cases the figures you’re inputting are based on a percentage.

This would also fix the problem where the available RAM has changed but SQL Server hasn’t been updated. This often happens after infrastructure upgrades, where available and much-needed RAM goes to waste, or on virtual machines that have been resized to 32GB while SQL Server is still configured to use 58GB.

Compress backups as default

It’s very rare that people would not want to compress their backups, therefore this option should be enabled by default, saving space and time for those who back up over the network.
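Until then, it’s a single instance-level setting to flip:

```sql
EXEC sys.sp_configure N'backup compression default', 1;
RECONFIGURE;
```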

Comparing SQL Server configurations via the command-line

Aireforge OmniCompare 0.9.0 was the first release to ship with the CLI module. This is a separate application, allowing comparisons to be called from third-party applications such as SQL Server Agent jobs, custom scripts and software deployment tools, including Octopus Deploy. OmniCompare CLI will then feed back basic information using termination or error codes, much like log shipping alerts.

The information required to perform the comparison and the credentials required to access the servers are stored within the .adc export file, therefore the main OmniCompare application will be required to perform any future changes such as new servers or new / updated comparisons.


What is OmniCompare?

OmniCompare performs a growing list of comparisons, which can be restricted if certain values are known to differ, such as core count or edition (e.g. Enterprise vs Developer). All comparisons in OmniCompare are available in OmniCompare CLI.
A common use case for OmniCompare is checking production against staging and staging against test. This manual task could be automated using a SQL Server Agent job or building OmniCompare into your deployment process, checking for differences and aborting the release if any are found.

Comparing trace flags via the command-line

Creating the comparison file

Open OmniCompare and select the servers you wish to compare. Once selected, you can now choose a full comparison or restrict it to individual checks.

For the purposes of this blog, we’ll limit the comparison to trace flags.

OmniCompare selection screen
Selecting servers to check and comparisons to run

Note: Although trace flag comparisons work for both SQL Server and AzureDB, many of them are not interchangeable as AzureDB is a very different database to SQL Server. AzureDB also has many trace flags enabled by default (17 at the time of writing).


Checking current values

If no differences are found, nothing will be displayed, although it’s often interesting to check the current values anyway. This can be done by changing the results filter from Differences only to All.


Showing all results has highlighted that each instance has the non-standard trace flag 1222 enabled, which is enabled by Redgate SQL Monitor to improve deadlock information.

OmniCompare results page showing trace flags
OmniCompare comparisons results without a filter


Exporting the comparison file

OmniCompare CLI is a lightweight application, therefore the comparison file must be created using the main OmniCompare application and exported. This exported file contains the servers to check, authentication credentials and the comparisons to perform.

To create the .adc file, select the command line icon at the top right of the comparisons section. You will then be prompted to enter a secure password and the location of the saved file.

We will save this file to d:\tf.adc.

Exporting OmniCompare comparison files
Comparison files are currently password protected

 Download OmniCompare (inc. CLI)

Using OmniCompare CLI

The OmniCompare CLI executable resides in the OmniCompare program folder. The arguments for the application will be listed if none are passed in, or if the help argument (-h) is used.

OmniCompare CLI Commands
omnicomparecli.exe -h

Performing a CLI comparison

To perform a trace flag comparison against the servers detailed above we simply pass in the comparison file and password.

For the purposes of this blog I have enabled full logging, although this should be used for debugging only as a lot of information is logged; especially during full comparisons of large estates.

omnicomparecli.exe -i d:\tf.adc -p <password> -v full
Successful OmniCompare comparison with error code 0

This command could then be run from a SQL Server Agent job, from software deployment tools, or from your own custom scripts.

Handling differences

We will now enable trace flag 1117 on the SQL2014 instance to demonstrate how the application changes. OmniCompare CLI has highlighted the change and returned an error code of 1. This error code could be caught and an alert raised or a deployment aborted.

Unsuccessful OmniCompare comparison with error code 1
Unsuccessful OmniCompare comparison with error code 1


The information detailed above could be logged to file using the -o command, although a simpler option would be to run the comparison via the OmniCompare application.

OmniCompare difference in trace flag
TF1117 enabled for DEV1\SQL2014


Discrepancies between configuration settings or database objects can be difficult to identify, especially with larger estates or multiple developer environments. Custom scripts or policy-based management could be used to check some of these settings but they will require updating and only work if the script or rule has been created.

OmniCompare checks a large and growing number of settings, highlighting unexpected changes and preventing configuration drift between instances that should be the same.

Other useful pre-deployment checks

  • Counts of database objects (tables, columns, indexes, functions, views etc.)
  • Server triggers
  • CLR assemblies
  • Trace flags
  • Operators, users and permissions
  • Database settings such as delayed durability, RCSI etc.

Download OmniCompare (inc. CLI)