
Remotely Administering the Docker Engine on Windows Server 2016

Continuing my series on working with Docker on Windows: I’ve noticed that I always open up a remote powershell window when working with Docker on servers. There’s nothing wrong with this; if you want to know how to do that you can follow my instructions here.

However, what if we want to connect to the Docker engine remotely? There’s got to be a way to do that, right? Well, it’s not quite so straightforward, but there is a way to do it involving a custom image downloaded from the Docker Hub (built by Stefan Scherer [g|t]) which creates TLS certs to allow remote connections.

Let’s go through the steps.

Open up an admin powershell session on your server and navigate to the root of the C: drive.

First we’ll create a folder to download the necessary certificates to: –

cd C:\
mkdir docker

Now we’re going to follow some of the steps outlined by Stefan Scherer here.

So first, we need to create a couple more directories: –

cd C:\docker
mkdir server\certs.d
mkdir server\config
mkdir client\.docker

And now we’re going to download an image from Stefan’s Docker Hub to create the required TLS certificates on our server and drop them into the folders we just created (replace the second IP address with the IP address of your server): –

docker run --rm `
  -e SERVER_NAME=$(hostname) `
  -e IP_ADDRESSES=127.0.0.1,192.168.XX.XX `
  -v "$(pwd)\server:c:\programdata\docker" `
  -v "$(pwd)\client\.docker:c:\users\containeradministrator\.docker" stefanscherer/dockertls-windows
dir server\certs.d
dir server\config
dir client\.docker

[Image: dockercerts]

Once complete you’ll see: –


Now we need to copy the created certs (and the daemon.json file) to the following locations: –

mkdir C:\ProgramData\docker\certs.d
copy-item C:\docker\server\certs.d\ca.pem C:\ProgramData\docker\certs.d
copy-item C:\docker\server\certs.d\server-cert.pem C:\ProgramData\docker\certs.d
copy-item C:\docker\server\certs.d\server-key.pem C:\ProgramData\docker\certs.d
copy-item C:\docker\server\config\daemon.json C:\ProgramData\docker\config
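N.B. – if C:\ProgramData\docker\config doesn’t already exist on the server, that last copy-item will create a file called config rather than dropping daemon.json into a directory. A quick guard, assuming the default docker paths: –

# create the config directory first if it isn't already there,
# otherwise copy-item creates a *file* named "config"
if (-not (Test-Path C:\ProgramData\docker\config)) {
    mkdir C:\ProgramData\docker\config
}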

Also open up the daemon.json file and make sure it looks like this: –

{
    "hosts":  [
                  "tcp://0.0.0.0:2375",
                  "npipe://"
              ],
    "tlscert":  "C:\\ProgramData\\docker\\certs.d\\server-cert.pem",
    "tlskey":  "C:\\ProgramData\\docker\\certs.d\\server-key.pem",
    "tlscacert":  "C:\\ProgramData\\docker\\certs.d\\ca.pem",
    "tlsverify":  true
}
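Before restarting, it’s worth sanity-checking that the file parses as valid JSON, as a malformed daemon.json will stop the docker service from starting. A quick way to do that in powershell: –

# this will throw an error if the JSON is malformed
Get-Content C:\ProgramData\docker\config\daemon.json -Raw | ConvertFrom-Json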

Now restart the docker engine: –

restart-service docker

N.B. – If you get an error, have a look in the application event log. The error messages generated are pretty good at letting you know what’s gone wrong (for a freaking change…amiright??)
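If you want to pull those messages into your powershell session instead of opening the event viewer, something like this should work (the engine logs to the application log under the source name docker): –

# grab the ten most recent docker engine events
Get-EventLog -LogName Application -Source docker -Newest 10 | Format-Table TimeGenerated, Message -AutoSize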

Next we need to copy the docker certs to our local machine so that we can reference them when trying to connect to the docker engine remotely.

So copy all the certs from C:\ProgramData\docker\certs.d to your user location on your machine, mine is C:\Users\Andrew.Pruski\.docker

We can then connect remotely via: –

docker --tlsverify `
  --tlscacert=$env:USERPROFILE\.docker\ca.pem `
  --tlscert=$env:USERPROFILE\.docker\server-cert.pem `
  --tlskey=$env:USERPROFILE\.docker\server-key.pem `
  -H=tcp://192.168.XX.XX:2375 version

[Image: dockerremoteconnect]
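Typing out those TLS flags every time gets old quickly. The docker client will also pick its connection settings up from environment variables, so something like this (a sketch, using the same cert location as above) lets you run plain docker commands afterwards: –

# note: the client looks for files named ca.pem, cert.pem and key.pem in DOCKER_CERT_PATH,
# so you may need to rename the copied server-cert.pem and server-key.pem accordingly
$env:DOCKER_HOST = "tcp://192.168.XX.XX:2375"
$env:DOCKER_TLS_VERIFY = "1"
$env:DOCKER_CERT_PATH = "$env:USERPROFILE\.docker"
docker version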

Remember that you’ll need to open up port 2375 on the server’s firewall and you’ll need the Docker client on your local machine (if not already installed).
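If you’d rather script that firewall rule than click through the GUI, a one-liner along these lines should do it (assuming the default port of 2375 that we set in the daemon.json): –

# allow inbound connections to the docker engine
New-NetFirewallRule -DisplayName "Docker TLS" -Direction Inbound -Protocol TCP -LocalPort 2375 -Action Allow

Also, Microsoft’s article advises that the following warning is benign: –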

level=info msg="Unable to use system certificate pool: crypto/x509: system root pool is not available on Windows"

Whatever that means. Maybe I’ll just stick to the remote powershell sessions 🙂

Thanks for reading!


Monday Coffee 2017-02-20

No rugby on last weekend so I didn’t have anything to distract me from working. On another note, my flat has never been cleaner!

So last week Microsoft announced Availability Groups for SQL Server on Linux.

This is a big announcement, as one of the first things I noticed when playing around with a SQL instance on Linux was the lack of high availability features. (I wrote a post on manually setting up transaction log shipping here).

Microsoft hasn’t released a stripped down version of Availability Groups either. They’ve said in the post that:-

all capabilities that make Availability Groups a flexible, integrated and efficient HADR solution are available on Linux as well

So are we looking at SQL on Linux moving towards an edition that will rival its Windows counterpart in features and usability? I think that’s what Microsoft’s end goal will be: a platform independent relational database system.

That, for us as SQL Server DBAs, means interesting times ahead. In the future not only will we be looking at the usual options when building our SQL Server instances, we’ll also be weighing up the pros and cons of the supporting operating system before making our decisions. It could lead to some interesting design room discussions.

I’ve said before that I think this is great. It’s opening up a whole new world to me as a DBA; I love learning new areas of technology, so I can’t wait to get my hands on this and start playing around.

Have a good week!


Friday Reading 2017-02-17

Is it me or is February dragging? Anyway, mixed in with this crazy week I’ve been reading: –

SQL Server vs SQL Server vNext
Henrico Bekker runs through a performance comparison of SQL Server against SQL Server on Linux

What Are FeFETs?
Article about a new memory type (FeFETs) that startup FMC is developing

SQL Server in an Azure VM – What’s new
Channel 9 video in which Luis Vargas goes through what’s new for SQL Server in Azure

New Flagship Xeon E5 and E7 Processors
Glenn Berry looks at the new Xeon processors

Altering a Job Step on Hundreds of SQL Servers with PowerShell
Rob Sewell details how he was able to make an alteration to hundreds of backup jobs in 5 mins (in an airport no less)

Have a good weekend!


Viewing container logs

I’ve been going over some demos for a presentation that I’ll be giving this year and I thought I’d write this quick post about something that keeps catching me out…


…but first, a bit of shameless self promotion. I’ll be giving my session on an Introduction to SQL Server & Containers at the following events this year:-

SQL Saturday Iceland on the 18th of March
SQLBits on the 8th of April
SQL Saturday Dublin on the 17th of June

Really looking forward to all three events. Containers are a technology that I’ve become quite a fan of, and I’m looking forward to sharing what I’ve learnt. So if you’re at any of these events, come and give my session a visit! 🙂


Anyway, as I was going through my demos and building containers, I was running the following code: –

docker run -d -p 15999:1433 --name testcontainer microsoft/mssql-server-windows

[Image: run-container]

All looks good, apart from when I go to check to see if the container is running: –

[Image: view-container]

I have to run the docker ps command with the -a flag (to show all containers; the default is to only show running ones), which means my container isn’t running: something’s gone wrong.
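For reference, the full command is: –

docker ps -a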

So to see what’s happening, I can have a look at the container’s logs.
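Using the container name from the run command above, that’s: –

docker logs testcontainer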

[Image: container-accept-eula]

ARGH! I forgot to specify -e ACCEPT_EULA=Y when running the container! This has caught me out more times than I care to admit, but it’s cool that there’s a simple command I can run in order to see what the issue is.
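For reference, here’s what the corrected run command would look like (the password is just an example; the sa_password environment variable is needed if you want to connect via SQL authentication): –

# remove the failed container first, as it's still holding the name
docker rm testcontainer
docker run -d -p 15999:1433 -e ACCEPT_EULA=Y -e sa_password=Testing11@@ --name testcontainer microsoft/mssql-server-windows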

Or I could just build a custom image from a dockerfile, specify ENV ACCEPT_EULA=Y in that and not have to worry anymore. I’ve detailed how to do that here.

Thanks for reading!


Monday Coffee 2017-02-13

Man, if there was an award for procrastinating I’d definitely be in with a shout for today (I write these posts on the weekend).

So I’m half writing this sitting on my balcony with a beer, one eye on the Italy v Ireland Six Nations game and trying not to think about all the work I should be doing.

Anyway one thing I’ve been thinking about is documentation. Nobody likes doing it but oh boy did it help me out last week.

We had one of our production servers fail during a RAM upgrade. No biggie, as it was a passive node in a cluster, but whilst we were writing up our steps to rebuild it one of my colleagues forwarded a wiki page detailing the server’s configuration. Awesome stuff: we had a complete list of what we needed to do. The thing that made me laugh was that it was written…by me.

I honestly don’t remember writing that document and yes, it was slightly out of date but it had steps on it which we would have completely forgotten to do if we didn’t have it.

So yes, writing documentation sucks but it’s one of those chores that can really help you out.

So don’t put it off, get that code/server spec/process documented. You’ll thank yourself in the end.

Have a good week.


Friday Reading 2017-02-10

Fun week. I performed some RAM upgrades on my production SQL boxes, which for one server somehow fried its motherboard, hard drive AND backup drive. Huh?? Meh, I like rebuilding servers…Also got a whole load of writing done, so quite pleased with myself! 🙂

In-between all that, I’ve been reading…

Why databases are not for containers
I’ve been doing a lot of work on SQL Server running in containers recently and am keeping an eye out online for articles. Here’s one that’s in the minority…why you shouldn’t be using containers for database systems.

Understanding Docker for Windows
Continuing my docker obsession (sorry, interest), here’s a youtube series on Docker for Windows, each part broken down into easy-to-digest sub-10-minute videos.

Performance Testing: SQL Server on Linux vs Windows
Slava Murygin goes through a performance test of SQL Server running on Linux vs Windows.

Understanding Left vs. Right Partition Functions (with Diagrams)
Ever had your head slightly melted when trying to figure out partition boundaries? Kendra Little’s post has diagrams to make it (a little bit) easier.

Query Store: How it works? How to use it?
One of the new features that I haven’t played around much with (yet) is the Query Store. Here’s a technet post detailing what it is and how you can use it.

Have a good weekend!


Creating SQL Containers from a Dockerfile

I’ve been playing around with SQL containers on Windows Server 2016 a lot recently, and well, building empty SQL containers is all fine and dandy but it’s kinda missing the point. What containerization allows you to do is build custom images designed for your environment, say with a bunch of databases ready to go (for QA, dev, testing etc.), from which containers can be built in a very short amount of time.

So, need a new SQL instance spun up for testing? Having a pre-built custom image ready will allow you to do that very rapidly, and the simplest way to build a custom image is from a dockerfile.

So let’s go through the process.

This post assumes that you have the docker engine already installed on Windows Server 2016. If you haven’t set that up, you can follow the instructions on how to do it here.

I’m also going to be running all my powershell commands in a remote session; if you don’t know how to set that up, the instructions are here.

First thing to do is verify that your docker engine is running:-

docker version

[Image: dockerversion]

And that you have a vanilla SQL Server image available:-

docker images

[Image: dockerimages]

If you don’t, you can follow the instructions here to pull an image from the docker repository.
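Pulling the vanilla image used in this post is a one-liner:-

docker pull microsoft/mssql-server-windows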

Now create a directory on your server to hold your dockerfile and database files. I’m going to create C:\Docker\Demo

mkdir c:\docker\demo

Ok, your server is all good to go. What I’m going to do now is:-

  • jump onto my local instance of SQL 2016
  • create a few databases
  • shutdown my instance of SQL
  • copy the database files to my server
  • create a dockerfile to build an SQL container image with those databases available
  • build a new SQL container

Ok, so in my instance of SQL I’m going to run:-

USE [master]
GO

CREATE DATABASE [DatabaseA] ON PRIMARY 
(	NAME		= N'DatabaseA'
	,FILENAME	= N'C:\SQLServer\SQLData\DatabaseA.mdf'
	,SIZE		= 8192 KB
	,MAXSIZE	= UNLIMITED
	,FILEGROWTH = 65536 KB) 
LOG ON 
(	NAME		= N'DatabaseA_log'
	,FILENAME	= N'C:\SQLServer\SQLLog\DatabaseA_log.ldf'
	,SIZE		= 8192 KB
	,MAXSIZE	= 2048 GB
	,FILEGROWTH = 65536 KB)
GO

CREATE DATABASE [DatabaseB] ON PRIMARY
(	NAME		= N'DatabaseB'
	,FILENAME	= N'C:\SQLServer\SQLData\DatabaseB.mdf'
	,SIZE		= 8192 KB
	,MAXSIZE	= UNLIMITED
	,FILEGROWTH = 65536 KB) ,
(	NAME		= N'DatabaseB_Data'
	,FILENAME	= N'C:\SQLServer\SQLData\DatabaseB_Data.ndf'
	,SIZE		= 8192 KB
	,MAXSIZE	= UNLIMITED
	,FILEGROWTH = 65536 KB)
LOG ON 
(	NAME		= N'DatabaseB_log'
	,FILENAME	= N'C:\SQLServer\SQLLog\DatabaseB_log.ldf'
	,SIZE		= 8192 KB
	,MAXSIZE	= 2048 GB
	,FILEGROWTH = 65536 KB)
GO

CREATE DATABASE [DatabaseC] ON PRIMARY 
(	NAME		= N'DatabaseC'
	,FILENAME	= N'C:\SQLServer\SQLData\DatabaseC.mdf'
	,SIZE		= 8192 KB
	,MAXSIZE	= UNLIMITED
	,FILEGROWTH = 65536 KB) 
LOG ON 
(	NAME		= N'DatabaseC_log'
	,FILENAME	= N'C:\SQLServer\SQLLog\DatabaseC_log.ldf'
	,SIZE		= 8192 KB
	,MAXSIZE	= 2048 GB
	,FILEGROWTH = 65536 KB)
GO

Really simple code just to create three databases. One of them (DatabaseB) has an extra data file, as I want to show how to add databases with multiple data files to a SQL container via a dockerfile.

Once the databases are created, shut down the instance either through the SQL config manager or run:-

SHUTDOWN WITH NOWAIT

N.B.- This is my local dev instance! Do not run this on anything other than your own dev instance!

Next thing to do is create our dockerfile. Open up your favourite text editor (mine is Notepad++, I’ve tried others but it simply is the best imho) and drop in:-

# using vNext image
FROM microsoft/mssql-server-windows

# create directory within SQL container for database files
RUN powershell -Command (mkdir C:\\SQLServer)

#copy the database files from host to container
COPY DatabaseA.mdf C:\\SQLServer
COPY DatabaseA_log.ldf C:\\SQLServer

COPY DatabaseB.mdf C:\\SQLServer
COPY DatabaseB_Data.ndf C:\\SQLServer
COPY DatabaseB_log.ldf C:\\SQLServer

COPY DatabaseC.mdf C:\\SQLServer
COPY DatabaseC_log.ldf C:\\SQLServer

# set environment variables
ENV sa_password=Testing11@@

ENV ACCEPT_EULA=Y

ENV attach_dbs="[{'dbName':'DatabaseA','dbFiles':['C:\\SQLServer\\DatabaseA.mdf','C:\\SQLServer\\DatabaseA_log.ldf']},{'dbName':'DatabaseB','dbFiles':['C:\\SQLServer\\DatabaseB.mdf','C:\\SQLServer\\DatabaseB_Data.ndf','C:\\SQLServer\\DatabaseB_log.ldf']},{'dbName':'DatabaseC','dbFiles':['C:\\SQLServer\\DatabaseC.mdf','C:\\SQLServer\\DatabaseC_log.ldf']}]"

What this file is going to do is create a container based on the lines of code in the file and then save it as a new custom image (the intermediate container is deleted at the end of the process). Let’s go through it line by line…

FROM microsoft/mssql-server-windows
This is saying to base our image on the original image that we pulled from the docker hub.

RUN powershell -Command (mkdir C:\\SQLServer)
Within the container create a directory to store the database files

COPY DatabaseA.mdf C:\\SQLServer…
Each one of these lines copies the database files into the container

ENV sa_password=Testing11@@
Set the SQL instance’s SA password

ENV ACCEPT_EULA=Y
Accept the SQL Server licence agreement (your container won’t run without this)

ENV attach_dbs="[{'dbName':'DatabaseA','dbFiles':['C:\\SQLServer\\DatabaseA.mdf'…
And finally, attach each database to the SQL instance


Name the file dockerfile (no extension), then copy it and the database files to your server into the directory created earlier.
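One way to get the files over is via the server’s admin share. A sketch (the server name and local paths here are just examples; the database file locations are the ones used in the CREATE DATABASE statements above):-

# copy the dockerfile and the database files to the demo directory on the server
copy-item dockerfile \\YourServer\c$\docker\demo
copy-item C:\SQLServer\SQLData\Database*.mdf \\YourServer\c$\docker\demo
copy-item C:\SQLServer\SQLData\Database*.ndf \\YourServer\c$\docker\demo
copy-item C:\SQLServer\SQLLog\Database*_log.ldf \\YourServer\c$\docker\demo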

[Image: demodirectory]

Now we can build our custom image. So in your powershell command window, navigate to the directory with the dockerfile in and run:-

docker build -t demo .

This will build a custom docker image running SQL with our databases. The -t flag will tag the image as demo, and don’t forget to include the . as this sets the build context to the current directory, which is where the docker engine will look for the file named dockerfile.

Once that’s complete, verify the image has been created:-

docker images

[Image: dockercustomimage]

Awesome stuff! We have our custom image. So let’s create a container from it:-

docker run -d -p 15788:1433 --name democontainer demo

This will create and run a new container based off our image with the host server’s port 15788 mapped to the port 1433 within the container. Once that’s complete, verify that the container is running:-

docker ps

[Image: dockercontainer]

Haha! Cool! Also, how quick was that??

We have our container up and running. Let’s connect to it remotely via SSMS and check that the databases are there. Use the host server’s IP address and the custom port that we specified when creating the container (in SSMS the port goes after a comma, e.g. 192.168.XX.XX,15788):-

[Image: connectssmstocontainer]

And then have a look in object explorer:-

[Image: verifydatabasesincontainer]

And there you have it. One newly built SQL container from a custom image running our databases.
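If you’d rather verify from the command line than SSMS, something along these lines should also work (using the sa password set in the dockerfile):-

sqlcmd -S 192.168.XX.XX,15788 -U sa -P Testing11@@ -Q "SELECT name FROM sys.databases"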

Imagine being able to spin up new instances of SQL with a full set of databases ready to go in minutes. This is the main advantage that container technology gives you: no more waiting to install SQL and then restore databases. Your dev or QA person can simply run one script and off they go.

I really think this could be of significant benefit to many companies and we’re only just starting to explore what this can offer.

Thanks for reading!


Monday Coffee 2017-02-06

A lot’s been said about last week’s GitLab outage so I’m not going to go over the details here, but I do want to talk about one point that came up when I was watching the guys fix the issue live on youtube.

A lot of the people making comments kept asking if the person who made the original mistake was going to be fired (and I mean a lot of people were asking). Finally one of the guys on the channel responded by saying that this wasn’t a personal mistake, it was a process failure, and that if the person who had made the original error hadn’t taken a backup before starting the work, they would not have been able to recover anything at all.

I did wonder about who was asking the question though, and I came to the conclusion that it couldn’t have been anyone who’s worked in the tech industry for any sustained period of time.

Show me a senior technical person who has never made a mistake causing an outage and I’ll show you a fibber

People do not get fired for making one mistake, could you imagine? Everyone that I know in tech has made at least one error causing an outage; others (like me) have made more than one. OK, yes, if this was the latest in a long line of mistakes that person had made at GitLab then maybe, but I doubt that it was.

It comes down to the age-old adage: learn from your mistakes. Bet that guy at GitLab won’t make that mistake again 🙂

Have a good (mistake free) week.


Friday Reading 2017-02-03

2017 is trundling along now, this week I’ve been reading…

The GitLab incident report
We’ve all had that moment of “what did I just do??” Here’s how the guys at GitLab responded to such a moment.

Linux OS Fundamentals for the SQL Admin
I don’t know how I missed this webinar but thankfully it’s been uploaded to youtube. Anthony Nocentino (b|t) goes over linux basics for us SQL DBAs.

How six lines of code + SQL Server can bring Deep Learning to ANY App
Rimma Nehme (t) goes through the what, why & how of using ML with SQL Server.

Stopping or Postponing an upgrade to SQL Database V12
Need to stay on v11 for now? Read this.

Hyper-V & powershell
Good starting reference for anyone using Hyper-V wanting to get into powershell

Have a good weekend!


SQL Containers and Networking

I recently talked with the guys over at SQL Data Partners on their podcast about SQL Server and containers. It was real good fun and I enjoyed chatting with Carlos Chacon (b|t) and Steve Stedman (b|t) about container technology and my experiences with it so far. Would definitely like to go back on (if they’ll have me 🙂 )

Anyway, during the podcast one of the questions that came up was “How do containers interact with the network resources on the host server?”

To be honest, I wasn’t sure. So rather than try and give a half answer, I said to the guys that I didn’t know and I’d have to come back to them.

Career Tip – when working with technology it’s always better to say you don’t know but will research and come back with an answer, than it is to try and blag your way through.

Once the podcast recording was over I started to think about it. Now there’s a bit of a clue in the code when you run a container:-

docker run -d -p 15798:1433 --name TestContainer ImageName

The -p 15798:1433 part of the code specifies which port on the host server maps to a port in the container. So there’s a NAT network in there somewhere?

I went off and did a bit of research and I found the following diagram which details how the containers interact with the host at the network layer:-

[Image: container_networks (image source)]

In essence, the container host’s network connectivity is extended to containers via a Hyper-V virtual switch, which containers connect to via either a host virtual NIC (for Windows Server containers) or a synthetic VM NIC (for Hyper-V containers).

The containers themselves can connect to the host network via different modes. The default is a NAT network created by the docker engine, onto which container endpoints are automatically attached; this allows for port forwarding from the host to the containers (which we saw in the code earlier in this blog).
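Other modes are available too. For example, if you wanted containers to sit directly on the physical network rather than behind NAT, you can create a transparent network (a sketch; the network name here is just an example):-

docker network create -d transparent TransparentNet

Containers can then be attached to it at run time with the --network TransparentNet flag.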

The default NAT setup can all be seen by running the following commands:-

To list the docker networks:-

docker network ls

[Image: dockernetworks]
And there’s our NAT network.
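To dig a little deeper into that NAT network (its subnet, gateway and any connected endpoints), you can run:-

docker network inspect nat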

To get the network adapters of a server:-

Get-NetAdapter

[Image: networkadapter]
There’s the vNIC that the containers use to connect to the virtual switch (I’m running my docker engine in a VM, hence the other hyper-v NIC).

To get the virtual switches of a hyper-v host (remember some hyper-v elements are installed when the container feature is enabled):-

Get-VMSwitch

[Image: virtualswitch]
And there’s the virtual switch.

So there’s how it works! Thanks for reading.