Monday Coffee 2017-01-30

Wow, what a week last week was.

The biggest thing that happened was that I got a session at SQL Saturday Iceland! They had loads of submissions and could only pick around 20, so I'm very chuffed to have been one of those selected. Cue smug face.

The event is on the 17th of March in Reykjavik and I'll be talking about one of my favourite subjects, containers. I've pretty much written the presentation at this point but I've still got over a month to polish it. One thing I am going to do is get my demos nailed down; I've got an Azure account so I'll be building an environment there to use. However, following the advice I've been given, I'll also be recording videos of each of the demos so that if the technical gremlins raise their heads during the presentation I won't have to worry.

It feels like I've lived and breathed containers over the last year; it's a subject that I feel passionate about. I don't want to go over the top but I think that they're a game changer, especially when it comes to Dev & Test environments.

Say you have 20 dev and test guys, all working with apps that reference databases in SQL Server. Now whenever a release goes out to production, those environments will also have to be updated to make sure that everyone is working on the same version. There's tonnes of software out there to automate deployments, but what if you could run a PowerShell script and bam! Everyone is working on exactly the same version of the database(s) within minutes.

That’s what containers give you, the ability to quickly and easily spin up custom instances of SQL Server.
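
To give a rough idea of what that can look like in practice, here's a minimal sketch using Docker (the image name, container name and sa password below are just placeholders for illustration; at the time of writing Microsoft publish a Linux image as microsoft/mssql-server-linux):

# run a throwaway SQL Server instance in a container (names/password are illustrative)
docker run -d --name sql-dev-01 \
    -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Testing11@@' \
    -p 1433:1433 \
    microsoft/mssql-server-linux

# finished with it? blow it away and spin up a fresh one in seconds
docker rm -f sql-dev-01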

Another good example of when containers come into their own is patching. Microsoft now recommends applying every CU to your production environment. Are you really going to get all those other instances of SQL Server up to the same patch level as well? You should; there's no point in testing on one patch level of SQL and then deploying to another.

If your dev guys are using containers it’s no problem. They can blow away their old containers and deploy new ones at the higher patch level. Great stuff eh?

There are other advantages, hey there are disadvantages too but I’m not going to go into all of them here. You’ll have to come to my session 🙂

Have a good week!

Friday Reading 2017-01-26

Is it really still January? It seems like this month is dragging on a bit. Anyway, this week I’ve been reading…

Nearly everything has changed for SQL Server PowerShell
The Scripting Guys at TechNet go over the changes to SQL Server PowerShell.

Learning Redis the hard way
Interesting post about the problems that Trivago had implementing Redis in their production environment.

Hack attaching a SQL Server Database
Argenis Fernandez details a way of hack-attaching a database with NORECOVERY, a handy bit of knowledge to have.

How to secure your passwords with PowerShell
Shawn Melton goes through how to store a password to be used in PowerShell scripts

Did someone say SQL Server AUTOPILOT?
Garry Bargsley takes us through using the undocumented DBCC AUTOPILOT command, cool stuff

And finally, why it’s called a patch

Have a good weekend!

Transaction log shipping in SQL Server on Linux

SQL Server on Linux has been out for a bit now and I've played around a little (see here) but haven't really used it in "anger", nor will I for the foreseeable future if I'm honest. Nevertheless, it's an area that I find very interesting; I know very little when it comes to the Linux operating system and, as it's such a huge area, it's something that I want to learn more about.

I feel the best way to learn is to actually try and do something with it. Sure, I could sit down and read articles on the web but I learn best by doing. So I began to think about what would be the first thing I’d try and do if presented with an instance of SQL Server running on Linux that I had to manage.

Right, well, being a DBA, setting up backups and restores I guess, but I want something a little more involved. How about setting up a warm standby instance! Log shipping! It's perfect as it's a fairly simple process within SQL but it should teach me a bit about the Linux environment (copying files etc.), and as SQL on Linux doesn't have an Agent this will have to be done manually.

But before I go through how I set this up…


  • I have published this as a purely academic exercise, I wanted to see if I could do it.
  • At no point should this be considered to have followed best practices.
  • This should NOT be used in a production environment.
  • There are probably better ways of doing this, if you have one then let me know.

Here goes!

What I'm going to do is set up two instances of SQL Server running on Linux and log ship one database from one to the other. So the first thing I did was get two VMs running Ubuntu 16.04.1 LTS, which can be downloaded from here.

Once both servers were set up (remember to enable ssh) I then went about getting SQL installed. I'm not going to go through the install in this post as the process is documented fully here. Don't forget to also install the SQL Tools; the full guide is here.

N.B. – when installing the tools I’ve always just run:-

sudo apt-get install mssql-tools

The link will tell you to add unixodbc-dev to the end of the statement but that’s caused me issues in the past.

You’ll also need to run:-

echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bash_profile
echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc

And then log out and log straight back in, otherwise you won't be able to run sqlcmd.

Anyway, once that's set up, verify that you can connect to both instances, either by sqlcmd on the server or through SSMS remotely.
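
For example, a quick sanity check from the server itself could look something like this (swap in your own sa password):

sqlcmd -S localhost -U sa -P '<YourSaPassword>' -Q "SELECT @@VERSION;"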

Ok, now we need to create folders on both servers to hold the scripts and backups needed. So in your favourite shell (I'm using bash on Windows), ssh into your first server and run:-

mkdir SQLScripts
mkdir SQLBackups

This will create two folders in your default home location, for me that’s… /home/andrew

Next thing to do is sort out access to these folders so that SQL Server can write backups to them. I found this kinda tricky if I'm honest as Linux permissions are completely new to me, but this is how I went about it.

When SQL Server is installed a group called mssql is created. What I’m going to do is add my user into that group and then change the ownership and group access to these folders to that group. So, run:-

sudo usermod -a -G mssql andrew

This change can then be verified by running:-

id andrew

N.B.- You’ll have to log out and then back in for this to take effect
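
(As an aside, if you don't fancy logging out, starting a new shell with the group applied should also do the trick; something like:)

newgrp mssql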

Then we can change the permissions on the folders:-

sudo chown mssql SQLScripts
sudo chown mssql SQLBackups

sudo chgrp mssql SQLScripts
sudo chgrp mssql SQLBackups

I also need to modify what the owner and group members can do in those folders. I’ve played around with these permissions a bit and the best configuration I’ve found is set by running: –

sudo chmod 770 SQLScripts
sudo chmod 770 SQLBackups

This will allow the owner of the folder (mssql) and members of the group mssql to do what they want. More details on setting permissions in Linux can be found here.

Once that’s done you can verify the change by running:-

ls -al


On server 2 run all of the above to set up the same folders and permissions. Once that's done we also need to set up an Archive folder (only on server 2) to move the transaction log backups into once they are restored. So run the following (same code as above really):-

cd /home/andrew/SQLBackups

mkdir Archive
sudo chown mssql Archive
sudo chgrp mssql Archive
sudo chmod 770 Archive


Once that’s done we can initialize a database for log shipping. So in your first instance of SQL we will create a login to run the backups, create a database, create a user for the login (with membership of the db_backupoperator role), take a full backup and then take a log backup:-

USE [master];

CREATE LOGIN [logshipper] WITH PASSWORD = 'Testing11@@';

CREATE DATABASE [LogShipped];

BACKUP DATABASE [LogShipped]
TO DISK = 'C:\home\andrew\SQLBackups\LogShipped.bak';

BACKUP LOG [LogShipped]
TO DISK = 'C:\home\andrew\SQLBackups\LogShipped.trn';

USE [LogShipped];

CREATE USER [logshipper] FOR LOGIN [logshipper];

ALTER ROLE [db_backupoperator] ADD MEMBER [logshipper];
N.B.- note how SQL Server maps the Linux paths. SQL thinks that the backup folder we created lives at C:\home\andrew\SQLBackups, not /home/andrew/SQLBackups.

Now we push these over to the secondary server so that we can restore them. To do this I’m going to use a program called scp, so back in your shell session on the first server, navigate to your SQLBackups folder and run: –

scp LogShipped.bak andrew@192.168.xx.xx:/home/andrew/SQLBackups
scp LogShipped.trn andrew@192.168.xx.xx:/home/andrew/SQLBackups

Before you can restore the database backups we need to allow the SQL Server instance on server 2 to read the files we've just transferred over. To do this, ssh to server 2 and run:-

cd /home/andrew/SQLBackups
chmod 666 LogShipped.bak
chmod 666 LogShipped.trn

Ok, once the files are on the secondary server, connect to the second instance of SQL via SSMS to restore the database and transaction log backups as normal when setting up log shipping:-

USE [master];

RESTORE DATABASE [LogShipped]
FROM DISK = 'C:\home\andrew\SQLBackups\LogShipped.bak'
WITH NORECOVERY;

RESTORE LOG [LogShipped]
FROM DISK = 'C:\home\andrew\SQLBackups\LogShipped.trn'
WITH NORECOVERY;

Now we need to create a login to perform the restores:-

USE [master];

CREATE LOGIN [logshipper] WITH PASSWORD = 'Testing11@@';

ALTER SERVER ROLE [dbcreator] ADD MEMBER [logshipper];

N.B.- I've noticed that even though the above permissions are correct to restore the log, this won't work with sqlcmd. The workaround I have is to make the logshipper login a member of the sysadmin role; not ideal, I know.

One thing I noticed when looking into this behaviour is a note on the documentation for the sqlcmd utility here:-

SQL Server Management Studio (SSMS) uses the Microsoft .NET Framework SqlClient for execution in regular and SQLCMD mode in Query Editor. When sqlcmd is run from the command line, sqlcmd uses the ODBC driver. Because different default options may apply, you might see different behavior when you execute the same query in SQL Server Management Studio in SQLCMD Mode and in the sqlcmd utility.

I’m going to keep researching this to see what’s going on but for now let’s continue with the setup.

Now that the initial database and transaction log backups have been restored, move them into the Archive folder setup earlier:-

cd /home/andrew/SQLBackups

mv LogShipped.bak Archive
mv LogShipped.trn Archive


Ok cool, barring some sqlcmd oddness, that’s our secondary SQL instance setup.

By the way, did you get asked to enter your password to connect to the secondary server? That’s going to be a problem for us as we want to have the log shipping process running automatically.

The way I sorted this was to setup public and private keys on the servers and then transfer the public key of server 1 to server 2. This then allows passwordless file transfers between the servers.

So on both servers run:-

ssh-keygen -t rsa

Don't enter anything in the prompts, just keep hitting enter until the key has been generated (you'll see its fingerprint and randomart image).


Then we transfer over the public key generated on server 1 to server 2 using the scp command:-

scp ~/.ssh/id_rsa.pub andrew@192.168.xx.xx:/home/andrew

Then on server 2 we need to copy the server 1 public key into ~/.ssh/authorized_keys. So in your home directory (or wherever you copied server 1's public key to) run:-

cat id_rsa.pub >> ~/.ssh/authorized_keys
chmod 700 ~/.ssh/authorized_keys

The last line is important as it restricts access to the authorized_keys file to its owner. Passwordless file transfer won't work if access to the keys is too open.
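
(As an aside, if ssh-copy-id is available on server 1 it wraps the key copy and permission steps into a single command; something like this should give the same end result:)

ssh-copy-id andrew@192.168.xx.xx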

Right, now we can create the scripts required to perform log shipping. So back on the first server go to the SQLScripts folder and run: –

nano BackupTLog.sql

This will create a new file and open it in the nano text editor (use other editors at your own peril!). In the file drop in:-

USE [master];

DECLARE @DateStamp NVARCHAR(20);
DECLARE @DBName NVARCHAR(128);
DECLARE @sql NVARCHAR(MAX);

SET @DateStamp = CONVERT(NVARCHAR(10),GETUTCDATE(),112) + '_'
                                + CONVERT(NVARCHAR(2),DATEPART(HOUR,GETUTCDATE()))
                                + CONVERT(NVARCHAR(2),DATEPART(MINUTE,GETUTCDATE()))
                                + CONVERT(NVARCHAR(2),DATEPART(SECOND,GETUTCDATE()));

SET @DBName = 'LogShipped';
SET @sql = 'BACKUP LOG [' + @DBName + '] TO DISK = ''C:\home\andrew\SQLBackups\' +
                   @DBName + '_TL_Backup_' + @DateStamp + '.trn''';

EXEC [master].dbo.sp_executesql @sql;

Nice and easy, this will simply create a time-stamped transaction log backup of the database.

So we have the SQL script to back up the database, let's now create the script to copy the transaction log backups from server 1 to server 2 (I'll call it CopyTLog.sh). So back in the SQLScripts folder on server 1:-

nano CopyTLog.sh
And drop in:-

cd /home/andrew/SQLBackups

file=$(ls -Art | tail -1)

rsync --chmod=666 $file andrew@192.168.xx.xx:/home/andrew/SQLBackups/

Now what this is doing is selecting the most recent file in the backups folder and then using a program called rsync to copy the file to server 2.

The reason I am using rsync is that I ran into the same issue with permissions that we corrected when copying the initial backups to server 2. The file that's copied is owned by myself and as such the instance of SQL Server on server 2 couldn't access it. What rsync allows you to do is set the permissions of the copied file, so I used chmod 666 to allow everyone on server 2 to read and write the file (I know, I know).

The final script on server 1 runs the backup and then kicks off the copy (I'll call it LogShip.sh), so:-

nano LogShip.sh
And drop in: –

cd /home/andrew/SQLScripts

sqlcmd -S . -U logshipper -P Testing11@@ -i ./BackupTLog.sql

sleep 10

./CopyTLog.sh
The script navigates to the SQLScripts folder, takes a backup using sqlcmd, waits 10 seconds and then copies the file across.

Finally on server 1 we need to make the scripts executable so:-

chmod 770 BackupTLog.sql
chmod 770 CopyTLog.sh
chmod 770 LogShip.sh

OK, so let’s create the script to restore the transaction log backups on the second server. So in the SQLScripts folder on server 2 run:-

nano RestoreTLog.sql

And then drop in:-

DECLARE @FileName NVARCHAR(100);
DECLARE @SQL NVARCHAR(MAX);
DECLARE @Files TABLE
    (BackupFile VARCHAR(200),
     Depth INT,
     IsFile INT);

INSERT INTO @Files (BackupFile, Depth, IsFile)
EXEC xp_dirtree 'C:\home\andrew\SQLBackups\', 10, 1;

SELECT TOP 1 @FileName = BackupFile
FROM @Files WHERE IsFile = 1 AND BackupFile LIKE '%.trn';

SET @SQL = 'RESTORE LOG [LogShipped] FROM DISK = ''C:\home\andrew\SQLBackups\' + @FileName + ''' WITH NORECOVERY';
EXEC sp_executesql @SQL;

Nice and easy again; this simply uses xp_dirtree to find the latest file (err.. see below) in the backups folder and uses that to restore the database.

Now there's a bug in the above script that stops it from reliably selecting the most recent transaction log backup file. Instead of mucking about with xp_cmdshell I thought a simpler process would be to archive the files after they're used (hence the Archive folder). So we need two more scripts: one to move the files and one to execute the restore and the move.

First, the move script (I'll call it MoveTLog.sh):-

nano MoveTLog.sh
And drop in:-

cd /home/andrew/SQLBackups

file=$(ls -Art | tail -1)

mv $file /home/andrew/SQLBackups/Archive

Very similar to the copy script created on server 1. It simply looks for the most recent file and moves it into the Archive folder. Let's create the script to run both the restore and the move (I'll call it RestoreAndMove.sh):-

nano RestoreAndMove.sh
And drop in: –

sqlcmd -S . -U logshipper -P Testing11@@ -i /home/andrew/SQLScripts/RestoreTLog.sql

/home/andrew/SQLScripts/MoveTLog.sh
And as on server 1, we need to make these scripts executable:-

chmod 770 MoveTLog.sh
chmod 770 RestoreTLog.sql
chmod 770 RestoreAndMove.sh


So we have all our scripts and a database ready to go, but how are we actually going to perform log shipping? These SQL instances have no Agent, so the answer is cron, the task scheduler that comes with Linux (its jobs are managed with crontab).

To open up crontab run (on server 1):-

crontab -e

You'll probably get a menu to choose your editor; if you use anything other than nano you're on your own 🙂

Here's what I set up on server 1. The crontab entry inserted is:-

*/5 * * * * /home/andrew/SQLScripts/LogShip.sh

What this is going to do is run that log shipping script every 5 mins.

Now we need to setup a similar job on server 2 to restore the transferred log backup. So hop onto server 2 and run the same command:-

crontab -e

Here's what I set up on server 2. The crontab entry inserted is:-

*/5 * * * * /home/andrew/SQLScripts/RestoreAndMove.sh

And what this code is going to do is look for the latest file in the SQLBackups folder, restore it and move the transaction log backup into the Archive folder every 5 minutes. Because of the 10 second delay in the log shipping script, the restored database on server 2 is always going to be 5 minutes behind.
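
If you want to double check what's been scheduled on either server, you can list the current user's crontab:

crontab -l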

So we’re pretty much done! The last thing to do is monitor as the scripts will start to be executed automatically.

On the second instance you can run something like the following to monitor the restores (pick whichever columns you find useful):-

SELECT [h].[destination_database_name], [h].[restore_date], [s].[backup_start_date], [m].[physical_device_name]
FROM msdb.dbo.restorehistory h
INNER JOIN msdb.dbo.backupset s ON [h].[backup_set_id] = [s].[backup_set_id]
INNER JOIN msdb.dbo.backupmediafamily m ON [s].[media_set_id] = [m].[media_set_id]
ORDER BY [h].[restore_date] DESC;


You will also be able to check the system log on the Linux boxes by running:-

tail /var/log/syslog

And you can limit it to the crontab output:-

grep CRON /var/log/syslog

Remember, it'll take 10 mins for the restores to kick off, as the way this has been set up the restore script restores the transaction log backup taken 5 mins previously. You can see this in the restore history, as the timestamp on the log backups is 5 mins behind the time of the restore.

Phew! If you've made it this far then fair play to you. That was long and involved but good fun to try and figure out (if at times completely infuriating! 🙂 ). I know it's very rough around the edges but I'm genuinely chuffed that I got it working and, as the whole point was to learn more about the Linux operating system, I feel it's been worthwhile.

Thanks for reading!

Monday Coffee 2017-01-23

I like to think of myself as a fairly hardworking, motivated person. However I recently met someone who regularly gets up at 5am, works a 12 hour day and then (somehow) works out in the evening. They also ran 18km before 10am last Saturday. You know what I was doing at 10am on Saturday? SLEEPING!

Madness nearly overcame me as I started thinking, maybe I should be getting up earlier?

But when I really thought about it, I knew that that was never going to happen. I'm simply not a morning person; I do most of my work in the late evening. Everyone has different work patterns and is more productive at different points during the day. So I'm going to use this person as motivation to get more stuff done in my own work schedule and not get disheartened because, whilst they're out blitzing 18km, I'm sound asleep.

Plus, I’d never be able to get up at 5am.

Have a good week!

Friday Reading 2017-01-20

Been an interesting week! My site has had the most views ever (so chuffed) and during all that I’ve been reading…

Microsoft’s Cyber Defense Operations Center shares best practices
Microsoft shares some of their best practices for how they respond to cyberthreats in real time

CPU Scheduling Basics – Windows and SQL Server
Anthony Nocentino gives us a refresher on CPU scheduling

50+ free Data Science books
Looking at moving into a career in Data Science? Hey, even if you’re already working as a Data Scientist this is a great resource.

Viewing rolled back transactions after a crash
Paul Randal shares code for retrieving details of transactions that were rolled back after a failure

Caching at reddit
A Senior Engineer goes through how caching has been implemented at Reddit

Have a good weekend!

The two types of IT worker

Last week I published this post about something a colleague noticed when playing around with a SQL Server instance running on Linux. Fairly innocuous, and I was chuffed when I saw that it had been linked to in a weekly newsletter.

As the day wore on, the post attracted a couple of *ahem* interesting comments about how Linux works. A couple were helpful and a couple were, well not so helpful.

I'm not going into the overall tone of the comments as William Durkin (b|t) pretty much sums it up here, but it did remind me of something I used to think about when I first started out learning my trade as a SQL Server DBA.

I firmly believe that there are two types of IT worker, which become apparent when they/we are asked for help. One will respond along the lines of “I can’t possibly explain to you now, it’s rather complicated” whereas the second group will say something like “Ah it’s not too hard. Here, let me show you”.

I’ve always tried to be of the latter type, in fact it’s why I started writing this blog. I admit there have been times when I’ve been busy that I haven’t been able to show a colleague what they were asking about, but I’ve always tried to make that up as soon as I possibly could.

My advice? Drop the ones that you meet who fall into the first group and cultivate relationships with members of the second. The only other thing to think about is, which group do you fall into?

Thanks for reading!

Monday Coffee 2017-01-16

I’ve been a scribbler for as long as I can remember, in fact I’m not sure that I write everything down because I have a bad memory or if I have a bad memory because I write everything down.

About a year ago I stopped using notepads and tried to go completely digital. I bought a premium Evernote subscription and started using Wunderlist for reminders. I have to say that it was a difficult transition for me, I’m so used to jotting things down constantly that it was difficult to start typing away on my tablet or phone. I found that I’d have to force myself to do it as it just didn’t feel as natural but slowly it became easier.

Now, a year later, my Evernote account is a hugely valuable resource. One of the best tools for it is the Chrome extension that allows you to clip web pages. Combine those notes with the ones that I entered manually from my many notebooks and I honestly don't know what I'd do without it.

That being said, I have found I’ve started taking rough notes back on a notepad (which then I copy into Evernote if I find I need them) so when a colleague showed me this Kickstarter Project I immediately backed it. I’ve never backed anything on Kickstarter before so I really don’t know what to expect. Hopefully it’ll turn up but until then I’m back scribbling away on a traditional pad of paper.

Have a good week.

Friday Reading 2017-01-13

Ahh, is it really Friday the 13th?!! So before Jason gets me, I’ll be reading…

A whole day of PowerShell & SQL
Join Rob Sewell and Chrissy LeMaire at SQL Sat Vienna on the 20th of Jan for a pre-con on PowerShell and SQL Server

Why open offices are bad for us
I’ve only ever worked in open plan offices and I have to admit, the noise can be an issue.

The MongoDB hack and the importance of secure defaults
The cynic in me says that if this happened to a MS product, how much of a backlash would there have been?

Ooops! Was that me?
Monica Rathbun details a mistake she made setting up SQL Server alerts and sets a challenge for others to blog about mistakes they’ve made whilst working with SQL Server

An Introduction to SQL Server Containers
Paul Stanton from Windocks runs through a quick intro into the world of containers running SQL Server

Have a good weekend!

Killing databases in SQL Server on Linux

Bit of fun this week with something that a colleague of mine noticed when playing around with SQL Server on Linux.

The first thing you do when playing with a new technology is see how you can break it right? Always good fun 🙂

So I'm going to break a database in a SQL Server instance that's running on Linux. I'm not going to go through the install process as there's a whole bunch of resources out there that detail how to do it, see here.

Once you have your instance up and running, connect to it as normal in SSMS and create a database with one table:-

CREATE DATABASE [GoingToBreak];
GO

USE [GoingToBreak];
GO

CREATE TABLE dbo.Test
    (ColA VARCHAR(10),
     ColB VARCHAR(10),
     DateCreated DATETIME);

Then insert some data (the values are just filler): –

INSERT INTO dbo.Test (ColA, ColB, DateCreated)
VALUES ('aaaaaaaaaa', 'bbbbbbbbbb', GETUTCDATE());
GO 1000

What we're going to do is delete the database files whilst the instance is up and running. Something you can't do to a database running in an instance of SQL on Windows, as the files are locked.

First, find where the files are located: –

USE [GoingToBreak];

EXEC sp_helpfile;


Then jump into your favourite terminal client (I'm using bash on Windows) and connect to the Linux server.

Then run: –

cd /var/opt/mssql/data/


Ok, so now to delete the files we can run: –

rm GoingToBreak.mdf GoingToBreak_log.ldf


And the files are gone! That database is deader than dead!

But….wait a minute. Let's have a look back in SSMS.

Run the following: –

USE [GoingToBreak];

SELECT * FROM dbo.Test;

Hmm, data’s returned! Ok, it could just be in the buffer pool. Let’s write some data: –

USE [GoingToBreak];

INSERT INTO dbo.Test (ColA, ColB, DateCreated)
VALUES ('aaaaaaaaaa', 'bbbbbbbbbb', GETUTCDATE());
GO 1000

What? It completed successfully??? Err, ok. Let’s run that select statement one more time…

SELECT * FROM dbo.Test;

Ok, that returned 2000 rows, right? Hmm, what happens if we run CHECKPOINT?

CHECKPOINT;

Errr, ok. That worked as well.

Alright, enough of this. Let’s break it for sure. Back in your terminal session run: –

sudo systemctl restart mssql-server

Once that's complete, jump back into SSMS and refresh the connection.


Ha ha! Now it's dead. That's pretty weird behaviour though, eh? I expect there's a Linux person out there who can explain why that happened 'cos I'm really not sure.

EDIT – Anthony Nocentino (b|t) has come back to me and said that the files when deleted get unlinked from the directory so we can no longer see them but the SQL process will still have them open; hence being able to execute queries. Once the instance is restarted the file handle will be released, the underlying blocks and inodes get deallocated; hence the database going into recovery pending. Thanks Anthony!
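
If you want to see this for yourself before restarting, the deleted-but-still-open files show up against the SQL Server process; something along these lines should work (assuming the process is named sqlservr and you have sudo rights):

# list open file handles for the sqlservr processes, deleted files are flagged '(deleted)'
for pid in $(pgrep -x sqlservr); do sudo ls -l /proc/$pid/fd; done | grep deleted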

One thing I do know is that SQL Server databases on Linux will continue to allow queries to be executed against them after their underlying files have been deleted. Pretty worrying imho; you could have a problem and not even know about it until your next server restart!

Monday Coffee 2017-01-09

I’m almost back into the swing of things now after the Xmas break, that holiday feeling has just about left me.

For most people, there’s a good break over the Xmas period but most IT workers that I know had to be on-call for some or most of the holiday period. Being on-call is part and parcel of being a DBA and I honestly don’t mind doing it but I guess it’s really dependent on how much you get called! Ever been called when out at a restaurant? It does kinda suck…

I've been in roles where being on call that evening pretty much guaranteed that I'd be getting a call, which, I admit, I wasn't too keen on. Especially when the factors that led to a problem with a system were out of my control and I'd just be firefighting the whole time.

I’m lucky now that my current role has allowed me to build a system that very rarely has problems and as such, I very rarely get called. Maybe that’s what being a good DBA (I like to think I am anyway) comes down to?

All the training courses, all the articles and blogs read, all the extra work that we put in... is it done so that we get bothered less? 🙂

Have a good week!