Getting Started with Kubernetes on Windows 10 using Hyper-V and MiniKube

Today we are going to get started with Kubernetes on machines running Windows 10. This is mostly for developers like us who use Windows 10 day to day and want to get started with Kubernetes quickly. It also makes it easier later to understand and work with Azure Container Service, aka AKS.

The easiest way to get started with Kubernetes in a local development environment is to use MiniKube. MiniKube is a tool that runs a single-node Kubernetes cluster inside a VM on your local machine, aimed at users looking to try out Kubernetes or develop with it.


A development computer running:
  • Visual Studio 2017 (mine is v15.5.2)
  • Enable Hyper-V if not done already
  • [Optional] Docker for Windows. Get Docker CE for Windows (stable). After installing and starting Docker, right-click on the tray icon and select Switch to Linux containers (if not already selected). My current version is v17.12.0-ce-win47 (15139)
  • Install kubectl, the Kubernetes command-line tool. This is needed to manage your Kubernetes cluster once it is published to Azure. It's easy to install kubectl using the Google Cloud SDK for Windows.
  • A Docker hub account (to publish images)

Download and Install MiniKube

To get started, let us first download MiniKube and set the PATH for it. While there are new releases quite frequently, the latest version as of now is 0.24.2, and it is available for download from here. Once you download the executable, rename it to minikube.exe and keep it at any location you wish; I kept it under 'C:\Program Files (x86)\Kubernetes\Minikube\minikube.exe'. Now add this folder path to your PATH environment variable: open 'System Properties' by searching for 'View advanced system settings' on your machine and update the PATH variable as shown in the following image. This makes sure the 'minikube' command is available in your PowerShell or CMD window by default, so you don't need to change directory to the MiniKube folder ('C:\Program Files (x86)\Kubernetes\Minikube') every time.
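If you prefer the command line over the System Properties dialog, the PATH update can also be done from PowerShell; a sketch along these lines, using the install folder from above (note that setx truncates very long PATH values, so the dialog is safer for big PATHs):

```shell
# Append the MiniKube folder to the user PATH (PowerShell; run once, then reopen the shell)
setx PATH "$env:PATH;C:\Program Files (x86)\Kubernetes\Minikube"
```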
Now quickly open up a PowerShell window and type the following command to make sure 'minikube' is installed correctly and the version is up to date.
minikube version
// output: minikube version: v0.24.2
So are we good to start MiniKube now? Not quite! As the title says, we are going to use Hyper-V and not VirtualBox for this tutorial. It turns out that by default MiniKube uses the first Hyper-V virtual network it finds, and for most users that is an internal one. As a result the created Linux VM cannot access the internet, which causes problems later during application deployment (it cannot download docker images from any public registry, or it simply hangs while creating the VM) and other issues. To overcome this we need to create an external network switch, as described here. I'm going to create an external network switch named 'Primary Virtual Switch'.
Make sure to restart your PC after creating this virtual switch to get rid of any routing table caching issues. That's all; we can now use MiniKube to its full potential.
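For reference, the external switch can also be created from an elevated PowerShell prompt instead of Hyper-V Manager; a sketch, assuming your physical adapter is named 'Wi-Fi' (check the real name with Get-NetAdapter):

```shell
# Create an external Hyper-V switch bound to the physical adapter (elevated PowerShell)
New-VMSwitch -Name "Primary Virtual Switch" -NetAdapterName "Wi-Fi" -AllowManagementOS $true
```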

Start MiniKube

To create the MiniKube VM (Linux) in your Hyper-V environment, please execute the following command.
minikube start --vm-driver=hyperv --kubernetes-version="v1.8.0" --hyperv-virtual-switch="Primary Virtual Switch" --memory 4096
Here we are asking MiniKube to create a VM with:
  • 4 GB of RAM (with the default 2 GB some of the services failed to come up due to memory issues, so I had to increase it)
  • Hyper-V as the virtualization driver
  • Kubernetes version 1.8.0 installed inside it (you can get all supported version details by executing the minikube get-k8s-versions command before minikube start)
  • The newly created external virtual switch named 'Primary Virtual Switch' as the network adapter
You should see in the PowerShell window that MiniKube downloads an ISO image from a pre-defined location and then starts the newly created virtual machine. Once done, you can verify that the cluster is running using the 'minikube status' command.
Just for fun, you can go to Hyper-V Manager and connect to the newly created VM called 'minikube'; the user name is 'docker' and the password is 'tcuser'. And voila! You have full control over the VM using bash.
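Instead of connecting through Hyper-V Manager, you can also drop straight into the VM from your own shell:

```shell
# Open an SSH session into the minikube VM (no password needed)
minikube ssh
```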

Congratz! We are now running a single-node Kubernetes cluster inside the VM. As the external network we specified is connected to my WiFi, the minikube VM got an IP of its own, and I can access services deployed inside it. You can find the details by executing the following command.
minikube status
Output should be like
minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at

To double-check, use the 'minikube dashboard' command, which should open up a *mostly* read-only view of your locally deployed Kubernetes cluster dashboard. There you can find details like the running system services, pods, deployments etc.
We can now use 'kubectl' commands however we need. Below are a few examples with explanations.
// Point kubectl at the local minikube cluster
kubectl config use-context minikube

// Get kubectl config details (including the minikube cluster)
kubectl config view

// Just to get the master node endpoint details with IP & port used
kubectl cluster-info

// Get the full cluster information (generally export to a file because of the output size)
kubectl cluster-info dump

// Get all currently running pods across all namespaces. 
kubectl get pods --all-namespaces
Apart from these, you can use all the other commonly used commands listed in my previous article to play with the cluster.

Create Docker Image & publish to Docker Hub

Follow my previous article to create a simple ASP.NET Core 2.0 web api app. There we published it to Azure Container Registry, but this time let's publish to Docker Hub (if you don't have an account, please create one). Once the image is created, execute the following commands to publish it to Docker Hub (make sure to name the image properly in the docker-compose.yaml file; mine is 'dsanjay/quotesgenerator:linux').
// Build the image locally
docker-compose up -d --build

// Log in to Docker Hub
docker login --username sanjayd --password *******

// Push the image to publicly accessible docker hub repository
docker push dsanjay/quotesgenerator:linux
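For reference, a minimal docker-compose.yaml along these lines would produce a properly named image (the service name and build context here are assumptions; the image name is the one used in this article):

```yaml
version: '3'
services:
  quotesgenerator:
    image: dsanjay/quotesgenerator:linux
    build:
      context: .
      dockerfile: Dockerfile
```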

Deploy App to local MiniKube Cluster

Once the cluster is up and running it's pretty simple to deploy new applications and access them. We already did that in the previous article. Below is the YAML file that we are going to provide to our MiniKube master (rather, the API server), which takes care of deploying the pods as needed (we are going to create one instance for now) and exposing them as a service.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: quotes
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: quotes
    spec:
      containers:
      - name: quotes
        image: dsanjay/quotesgenerator:linux
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 250m
          limits:
            cpu: 500m
---
apiVersion: v1
kind: Service
metadata:
  name: quotes
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30663
  selector:
    app: quotes
So let's go back to PowerShell and execute this command (make sure you have changed to the directory where the YAML file is located):
kubectl create -f quotes.yaml
While the service is being created you can watch the status by refreshing the Kubernetes Dashboard you opened earlier. It should become green within a few seconds, once the image is downloaded from Docker Hub and the service is started.
Once it's in green state, we are done 😊 Our app is running inside a Kubernetes cluster on Windows 10 using Hyper-V and MiniKube. To verify it's actually working, let's browse '' (to get the IP you can use the 'minikube ip' command too). This is the IP assigned to the MiniKube VM, and remember we specified port '30663' in the YAML file. So if all is good you should get back some random quotes with the machine name appended at the end.
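The check can also be scripted; a sketch in bash syntax (adapt for PowerShell), assuming the NodePort (30663) and api route from this article:

```shell
# Fetch a few quotes straight from the NodePort service
MINIKUBE_IP=$(minikube ip)
curl "http://$MINIKUBE_IP:30663/api/quotes/5"
```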
Now you can play with the deployment like increasing the pod count etc, details can be found here.

Before we go: to stop the MiniKube cluster execute 'minikube stop', and to completely remove the VM use the 'minikube delete' command.

A word of caution: MiniKube is not stable enough on Windows as of today. Hibernating the computer, stopping the MiniKube installation, changing the network, or making other unexpected changes can cause the installation to fail.

If Minikube does not start, you'll need to delete and re-create your instance:

  • Stop minikube: minikube stop
  • Delete minikube: minikube delete
  • Remove the 'minikube' virtual machine from Hyper-V Manager if the minikube delete command failed.
  • Delete "C:\USERS\<<yourname>>\.minikube\" if it exists
  • Restart the installation process and give the new VM a static MAC address if necessary.

Let me know if you face any issues or you have any suggestions/questions.

Getting Started with Azure Managed Kubernetes - Upgrade your application in AKS

In the previous article we deployed an ASP.NET Core 2.0 api app in AKS (Azure Managed Kubernetes). Now let's quickly look at how to upgrade the application when the code changes. Please make sure you read the previous article to follow along with this one.

Let's start with a code change in our previously created controller class: just alter the position of the concatenated output in the 2nd Get method, moving the machine name to the end.
return Ok(ListOfQuotes.OrderBy(_ => r.Next(ListOfQuotes.Length)).Take(count).Select(x => $"{x}-{Environment.MachineName}"));
Once the controller is updated, let's create a new docker image locally with a new tag (consider this the new version). Once done, you should be able to see the new image created with the tag 'linuxV2'.
// Build the image
docker build -t .

// Validate the new image is created
docker images

Now let's publish the same in our previously created Azure Container Registry (ACR). Make sure you are authenticated.
// Authenticate to ACR, you may need az login before this if cached credentials are cleared by now
az acr login --name sanjaysrepo

// Push new image to ACR
docker push

// Validate we have both the images (linux + linuxV2) in ACR
az acr repository show-tags --name sanjaysrepo --repository quotesgenerator --output table

Cool! It's time to update our Kubernetes cluster deployment with this new image. Before we start, validate that you still have a deployment with 5 pods (spread across 2 VMs). To ensure maximum uptime, multiple instances of the application pod must be running; otherwise, as you can guess, your app will go offline while the only pod is being upgraded. Scale up your pods (as shown in the previous article) if you don't have more than one instance of the service running.
Now, to upgrade your application, execute the following command. While the pods are being updated you can check their status with the 'kubectl get pods' command. You can see how the pods restart themselves. After a few moments all pods will be up and running again with the latest image.
// Upgrade image in all the pods of a deployment
kubectl set image deployment quotes

// Monitor the pods
kubectl get pods
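The general form of the upgrade command looks like this (the deployment and container names are the ones from this walkthrough; the registry path is a placeholder for your ACR login server):

```shell
# kubectl set image deployment/<deployment> <container>=<new-image>
kubectl set image deployment/quotes quotes=<<your-acr-login-server>>/quotesgenerator:linuxV2

# Watch the rolling update complete
kubectl rollout status deployment/quotes
```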

While this is in progress, quickly fire up Fiddler and hit the api url as before ('http://your-public-ip/api/quotes/12') multiple times, as quickly as possible. You can actually see that the pods that are not yet upgraded still return values in the old format, while the ones already upgraded return the new format. This is so cool!
All done! We just successfully updated our code in the existing/running AKS cluster.

Getting Started with Azure Managed Kubernetes - Deploy your first ASP.NET Core 2.0 app in AKS

Well, Microsoft released AKS, aka the *new* Azure Container Service, aka Azure Kubernetes Service (fully managed), a few months back, though it is still in public preview. This guide will help you get started with AKS. We are going to deploy our first ASP.NET Core 2.0 app in AKS and then do some cool stuff like scaling up and down.
Following are the steps we are going to perform.
  • Create a docker image (in Linux) from a simple ASP.NET Core 2.0 app
  • Publish the same in Azure Container Registry (ACR)
  • Create a Kubernetes cluster using AKS in Azure
  • Deploy our application in the cluster
  • Scale up/down
  • [Optional] Create a public/private SSH key


A development computer running:
  • Visual Studio 2017 (mine is v15.5.2)
  • Docker for Windows. Get Docker CE for Windows (stable). After installing and starting Docker, right-click on the tray icon and select Switch to Linux containers (if already not). My current version is v17.12.0-ce-win47 (15139)
  • An Azure subscription
  • Google Cloud SDK for Windows. Not mandatory; we can use PowerShell or Bash etc., but let's just use this today.
  • Install kubectl, the Kubernetes command-line tool. This is needed to manage your Kubernetes cluster once it is published to Azure.
  • [Optional] Enable Bash for Windows.

Create a docker image (in Linux) from a simple ASP.NET Core 2.0 app

We are not going to go into details about all the steps here; you can learn more about creating an image in a previous article. Below are my api controller file and the dockerfile. You can get these files from GitHub too. Just take note of the Linux image tags in the dockerfile, to be downloaded from Docker Hub. We are using the latest Linux images here.
    public class QuotesController : Controller
    {
        private static readonly string[] ListOfQuotes;

        static QuotesController()
        {
            ListOfQuotes = JsonConvert.DeserializeObject<QuoteList>(System.IO.File.ReadAllText("quotes.json")).Quotes;
        }

        public IActionResult Get() => Ok(ListOfQuotes[new Random().Next(ListOfQuotes.Length)]);

        public IActionResult GetMany(int count)
        {
            if (count < 0 || count > ListOfQuotes.Length)
                return BadRequest($"number of quotes must be between 0 and {ListOfQuotes.Length}");

            var r = new Random();
            return Ok(ListOfQuotes.OrderBy(_ => r.Next(ListOfQuotes.Length)).Take(count).Select(x => $"{Environment.MachineName}-{x}"));
        }

        private class QuoteList
        {
            public string[] Quotes { get; set; }
        }
    }
FROM microsoft/aspnetcore-build:2.0.5-2.1.4-jessie AS build-env
WORKDIR /app
COPY *.csproj .
RUN dotnet restore
COPY . .
RUN dotnet publish -c Release -o out

FROM microsoft/aspnetcore:2.0.5-jessie
WORKDIR /app
COPY --from=build-env /app/out .

ENTRYPOINT ["dotnet", "DockerWebTestApp.dll"]
Now open the Google Cloud SDK Shell (we shall use this for the rest of this article; you can use Bash for Windows, Git Bash, PowerShell or even a CMD shell too). Change directory to your web project's base folder (where the dockerfile is also located). Now create the docker image by executing the following 'docker build' command. You can guess why we are choosing a name like ''. Well, you guessed it right: we are going to create an Azure Container Registry named 'sanjaysrepo' to push this image to in the next step. Also, the tag 'linux' is just to identify it; you can leave it blank, and the default 'latest' tag will be used in that case.
docker build -t .
You should be able to see the locally created image now.
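Before pushing, you can optionally smoke-test the image locally (the container name and host port here are arbitrary; the route is the one from the controller above):

```shell
# Run the image, hit the api once, then clean up
docker run -d -p 8080:80 --name quotes-test <<your-image-name>>
curl http://localhost:8080/api/quotes/3
docker rm -f quotes-test
```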

Publish the image in Azure Container Registry (ACR)

Now we can publish this image (which is still on your local dev box) to a container registry like Docker Hub, Google Cloud or ACR. For this article let's choose ACR. In your open Google Cloud shell (or any other) execute the following commands one after another to publish the image to Azure. All these steps are self-explanatory.
// Login into your azure subscription
az login 

// Change to proper subscription if you have more than one
az account set --subscription <<subscription id>> 

// Create a resource group called dockerrepo. You can use any existing too as needed
az group create --name dockerrepo --location eastus 

// Create an ACR called 'sanjaysrepo' under the RG
az acr create --resource-group dockerrepo --name sanjaysrepo --sku Basic --admin-enabled true 

// Login into the ACR instance once created
az acr login --name sanjaysrepo

// Push the locally created image to ACR
docker push

Create a Kubernetes cluster using AKS in Azure

Let's create a Kubernetes cluster of two VMs in Azure. Please execute the following steps to provision the cluster. 
// Enabling AKS preview for your Azure subscription if not done already. Don't worry about the warning after execution, it's actually done already!
az provider register -n Microsoft.ContainerService

// Create a RG in East US location (many of the azure DCs don't support AKS fully till now, EUS is still the most reliable :))
az group create --name kubernetes --location eastus

// Create the AKS cluster with 2 VM's in the same RG created above. Half of your job is done here :)
az aks create --resource-group kubernetes --name quotesservice --node-count 2 --generate-ssh-keys
The last command is going to take some time, obviously :) At the end it should show you a few important details about the cluster, such as the generated SSH keys (with location), the AD app created for the service principal, etc. You should see something like below (click on the images for details).
Congratz, your first AKS Linux cluster is deployed. Now let's see how we can communicate with it from the dev machine.

Deploy our application in the cluster

Before we dig into deploying our application, let's take a moment to understand the deployed cluster and how we, as developers or release managers, can communicate with it.
If you now open up your Azure portal you will actually see two resource groups created for you. One is 'kubernetes', as you asked for. But there is another one, automatically created by Microsoft, called 'MC_kubernetes_quotesservice_eastus'. The naming convention used by Microsoft for the 2nd one is pretty straightforward: 'MC' most probably stands for 'Managed Cluster', followed by your specified RG name, the cluster name and, at last, the location to make it unique. If you open up the first RG you will see something like below. It has exactly one resource, called 'Container service managed'. This is actually the pointer to the master nodes of Kubernetes, totally managed by Microsoft (patching, upgrades etc.). You don't have access to the actual Kubernetes control plane or the master nodes.

But if you open up the 2nd RG created by Microsoft you should see lots of resources, like the vnet, node VMs, virtual network interfaces per node, route tables, availability sets, NSGs etc. These are the resources you have full access to. Kubernetes clusters are mainly managed by a command-line tool called 'kubectl', which you already installed as a prerequisite if you have been following along. So we are going to use this tool to deploy and manage applications/services on these nodes. You also need to understand a bit about the YAML files that Kubernetes uses to deploy applications/services. You can read more here.
To make sure 'kubectl' can communicate with our newly created AKS cluster, let's execute the following command, which configures 'kubectl' to communicate with your cluster securely. You should see an output like 'Merged "quotesservice" as current context in C:\Users\sanjayd\.kube\config'.
az aks get-credentials --resource-group kubernetes --name quotesservice
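You can confirm the credentials were merged before going further:

```shell
# The current context should now be the AKS cluster
kubectl config current-context

# And the two agent nodes should be visible
kubectl get nodes
```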
Once this is done, we can execute any 'kubectl' command to validate the cluster. A few commonly used commands are listed below.
// get all the nodes
kubectl get nodes 
// get cluster component statuses
kubectl get cs 
// get all services
kubectl get svc 
// get all pods
kubectl get pods 
// get all configs
kubectl config view 
// get all latest events
kubectl get events 
// get all deployments
kubectl get deployments 
// get logs from a pod
kubectl logs your-pod-name 
// get details about a pod
kubectl describe pod your-pod-name 
// get details about a node VM
kubectl describe node your-node-name 
// get overall cluster info
kubectl cluster-info 
Now that we can talk to our nodes, let's see how we can deploy our docker image to the cluster. As we saw earlier, 'kubectl' generally uses a YAML file to deploy one or more resources to a cluster, so let's first create one. In the same solution (or any place you want) add a 'quotes.yaml' file and replace its content with the below.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: quotes
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: quotes
    spec:
      containers:
      - name: quotes
        image: <<your-acr-login-server>>/quotesgenerator:linux   # substitute your ACR image here
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 250m
          limits:
            cpu: 500m
      imagePullSecrets:
      - name: quotesregistrykey
---
apiVersion: v1
kind: Service
metadata:
  name: quotes
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: quotes
While you can read more about how this works, in short we are creating a template for a single-replica deployment and then a service from that deployment. The deployment is going to use an image located at ''. To access the image AKS needs a credential; here we provide the name of that credential as 'quotesregistrykey'. To create the named credential, execute the below command after fetching your ACR's user credentials.
kubectl create secret docker-registry quotesregistrykey --docker-server= --docker-username=your-ACR-registry-user-name --docker-password=your-ACR-registry-user-password
Once this is done we are good to deploy our application to the cluster. So let's go back to the shell and execute this command:
kubectl create -f quotes.yaml
While the service is being created you can watch the status by executing the below command. Wait until a public IP is added in the <EXTERNAL_IP> column (press Ctrl+C to get out of the waiting mode).
kubectl get service quotes --watch

So ideally Azure is now creating two new resources for you: a load balancer and a public IP to expose your service over port 80 (as specified in the YAML file).
Once you see that an external IP is assigned, we are done :). Go ahead and browse (or hit via Fiddler) the url 'http://<<public-ip>>/api/quotes/12'; you should see output with a 200 status.
So we just deployed an instance of our app, and it's running as a single replica for now (though we created 2 nodes for our service). If you execute the url from Fiddler multiple times, you will always see the same single machine name in the output, as in the 5 requests below.
So now let's scale it up a bit.

Scale up/down

The easiest way to scale up the instances is to execute the below command:
// create 5 replicas of the app/pods
kubectl scale deployment quotes --replicas=5 
// output: deployment "quotes" scaled
Now you can check the deployment and see that all 5 instances are up and running (after a few moments).
To verify, just go back to Fiddler and fetch the same url quickly multiple times; ideally you should see different machine names now.
Just out of curiosity, you can use 'kubectl describe pod pod-name' against each of the 5 pods we just created and check their distribution across the 2 VM nodes.
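Instead of describing each pod one by one, a single command shows the node placement at a glance (the label selector comes from the YAML above):

```shell
# List the quotes pods along with the node each one is scheduled on
kubectl get pods -l app=quotes -o wide
```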
Cool! We are almost done. A few last commands for daily use:
// scale up/down a deployment to create 2 replicas
kubectl scale deployment your-deployment-name --replicas=2 

// delete a deployment
kubectl delete deployments/your-deployment-name 

// delete a service
kubectl delete services/your-service-name 

// increase/scale the actual VM counts
az aks scale --resource-group=kubernetes --name=quotesservice --node-count 3 

// deletes the AKS deployment altogether
az aks delete --name quotesservice --resource-group kubernetes --no-wait

[Optional] Create a public/private SSH key

If you remember, we used the '--generate-ssh-keys' param as part of the 'az aks create' command. To use your own previously created SSH key, you can use the '--ssh-key-value' param. Here we shall see how easily we can create an SSH key using Bash for Windows. So first fire up Bash for Windows, navigate to the root directory, then simply execute the following command. You can provide a path and passphrase when prompted, or just keep pressing enter.
ssh-keygen -t rsa -C ""
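Once the key pair exists, it can be passed to cluster creation like this (the key path is an assumption; the rest matches the command used earlier):

```shell
# Use an existing public key instead of --generate-ssh-keys
az aks create --resource-group kubernetes --name quotesservice --node-count 2 --ssh-key-value ~/.ssh/id_rsa.pub
```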

Hope we have learnt something about AKS (and a few other things :)) today. Let me know if you face any issues or have any suggestions/questions.

Quick tip before we close. To manage the cluster using a web UI, use the following command. It essentially creates a proxy between your dev machine and the Kubernetes API and opens the Kubernetes Dashboard in a web browser.
az aks browse --resource-group kubernetes --name quotesservice

Create your first (ASP.NET Core 2.0) Service Fabric container application on Windows

So it all started when I wanted to host a very basic ASP.NET Core api app on Azure. I had many options: hosting as an app service, hosting inside Service Fabric, or hosting inside Service Fabric using containers. The third option is something I wanted to explore more, as I had almost no knowledge of the topic beyond what an image and a container are. So I started reading and reading, and went through a few videos on Pluralsight, mainly this one. I was at last able to create a docker image on my local machine and deploy it locally (more on that in some other post). The only thing left was to deploy and test the same on Azure. But then I spent hours figuring out the process; though many things are documented on MSDN, nowhere was it covered end to end. So I thought of writing it down in detail, step by step, from creating an image to hosting it in Service Fabric as a containerized service. Please bear with me as I paste a few images along the way. Code will be available below.

Note: All the steps that are described here are true as per today's releases. If things are updated later I'll try to update this post too :)


A development computer running:
  • Visual Studio 2017 (mine is v15.5.2)
  • Service Fabric SDK and tools. My current version is SDK: and Runtime:
  • Docker for Windows. Get Docker CE for Windows (stable). After installing and starting Docker, right-click on the tray icon and select Switch to Windows containers. This is required to run Docker images based on Windows. My current version is v17.12.0-ce-win47 (15139)
  • A basic Windows Service Fabric cluster with one node (for testing purpose only) running on Windows Server 2016 with Containers. Make sure you expose port 8080 for this activity - Details here
  • A registry in Azure Container Registry - Details here

Create a new ASP.NET Core 2.0 api app

Not going into details; you can find many articles on getting started with ASP.NET Core 2.0. At the end it's a basic api app. Below is the solution structure.
Typical core 2.0 solution structure
The controller has a basic get method to return the top n random quotes/lines, nothing special. I copied it from this blog post.

Create a docker image (locally)

Here comes the interesting part: creating a docker image (based on Windows). So let's first create a docker file in the solution. Add a new file called DockerFile in the project and add the lines below. I shall explain them in detail.

FROM microsoft/aspnetcore-build:2.0.5-2.1.4-nanoserver-sac2016 AS build-env
WORKDIR /app
COPY *.csproj .
RUN dotnet restore
COPY . .
RUN dotnet publish -c Release -o out

FROM microsoft/aspnetcore:2.0.5-nanoserver-sac2016
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "DockerWebTestApp.dll"]

There are many ways to build an ASP.NET Core api app and build an image; e.g. you can build on your own machine and then containerize from the published location, or you can build and publish inside a container and then create an image out of that. I'm taking the 2nd route here.

Let's explore the 'BUILD PHASE' in the docker file. Let me write the 6 lines under the build phase in plain English.
  1. Get an image named 'microsoft/aspnetcore-build' created by Microsoft from Docker Hub with the tag '2.0.5-2.1.4-nanoserver-sac2016', and name the build stage 'build-env'. This image comes with the full .NET SDK installed, which makes sure we can build .net projects inside the container.
  2. Go inside the container, create a directory called 'app' and make it the current working directory
  3. Copy the local machine's csproj file (from where your docker file resides) to the above working directory
  4. Ask the .NET SDK inside the container to restore any nuget packages needed by the csproj
  5. After the restore, copy all the remaining files from the Visual Studio project (like the controller) to the working directory inside the container
  6. Now that we have all necessary project files inside the build container, run a publish command to compile and publish the csproj

Hope this makes sense now :)

The same goes for the 'RUN PHASE'.
  1. Get an image named 'microsoft/aspnetcore' created by Microsoft from Docker Hub with the tag '2.0.5-nanoserver-sac2016'. To run the app we only need the .NET runtime.
  2. Go inside the container, create a directory called 'app' and make it the current working directory
  3. Copy all published artifacts from the 'build-env' stage we prepared before to the current directory
  4. Set the startup path for the api app.
A few things to notice...
Docker automatically keeps or removes intermediate containers as it executes the steps above during a docker build; that's mostly for performance and build-time savings. If you notice, I didn't copy all the files from my machine to the build container before the nuget restore; I copied just the csproj first. That makes sure subsequent builds (as we modify the actual controller code) stay fast, because the restore layer is cached. You can read more on how to split these docker steps to minimize build time. The most important point is to choose the right image to download from docker hub; the tag matters a lot. E.g. I want to host this api app inside a Service Fabric service, and the Windows Server 2016 VMs (with containers) that Service Fabric creates by default are still not upgraded to the Fall Creators Update, so we cannot use the latest tag available in docker hub, which is (as of today) '2.0.5-2.1.4-nanoserver-1709'. We must use the tag '2.0.5-2.1.4-nanoserver-sac2016'.

Now that you have the docker file, you can build an image via docker build from PowerShell (change directory to the location of the dockerfile). Give it a name like 'dsanjay/quotesgenerator' and an optional tag like 'latest'. There is a reason why I chose 'dsanjay/quotesgenerator' as the name: it will be helpful while publishing to docker hub. You can find more details here.

docker build -t dsanjay/quotesgenerator:latest .

Once done, you can execute the docker images command from PowerShell to check that the image is created.

docker images command
Now you have the image on your local machine, so you need to publish it to some place from where Azure Service Fabric (or rather, anyone else) can download and use your app. You can publish either to a public docker hub repository or to an Azure Container Registry. Follow these steps to publish to your repository.

docker login --username sanjayd --password ************
docker push dsanjay/quotesgenerator:latest
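A side note: passing --password on the command line leaves the secret in your shell history. Docker supports a safer variant (the password file path here is an assumption):

```shell
# Read the Docker Hub password from stdin instead of the command line (PowerShell)
Get-Content .\dockerhub-password.txt | docker login --username sanjayd --password-stdin
```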

Once uploaded, the image is in docker hub. We can use this image to deploy our Service Fabric service. Let's do that now.

Create the containerized service in Visual Studio

The Service Fabric SDK and tools provide a service template to help you create a containerized application.
  • Start Visual Studio. Select File > New > Project.
  • Select Service Fabric application, name it "SFContainerTestApp", and click OK.
  • Select Container from the list of service templates.
  • In Image Name enter "", the image you pushed to your container repository.
  • Give your service a name, say 'QuotesService', and click OK.

Configure Communication of the Service Fabric Service

In the 'ServiceManifest.xml' file, expose 8080 as the public port for the API app you are going to publish.
<Endpoint Name="QuotesServiceTypeEndpoint" UriScheme="http" Port="8080" Protocol="http" />
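For context, this endpoint declaration lives inside the Resources section of 'ServiceManifest.xml'. A sketch of the surrounding structure (only the Endpoint line itself is taken from this project):

```xml
<Resources>
  <Endpoints>
    <!-- The Name is what the PortBinding in ApplicationManifest.xml references -->
    <Endpoint Name="QuotesServiceTypeEndpoint" UriScheme="http" Port="8080" Protocol="http" />
  </Endpoints>
</Resources>
```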

Configure container port-to-host port mapping and container-to-container discovery

In 'ApplicationManifest.xml', add a port binding between the Service Fabric service endpoint and the container hosted inside it.

<PortBinding ContainerPort="80" EndpointRef="QuotesServiceTypeEndpoint"/>

Also, just for clarity, make sure the isolation mode is defined as 'process' in the policies; this is the default value.

<ServiceManifestImport>
  <ServiceManifestRef ServiceManifestName="QuotesServicePkg" ServiceManifestVersion="1.0.0" />
  <Policies>
    <ContainerHostPolicies CodePackageRef="Code" Isolation="process">
      <PortBinding ContainerPort="80" EndpointRef="QuotesServiceTypeEndpoint" />
    </ContainerHostPolicies>
  </Policies>
</ServiceManifestImport>

Deploy the container application

Save all your changes and build the application. To publish your application, right-click on SFContainerTestApp in Solution Explorer and select Publish.

In Connection Endpoint, enter the management endpoint for the cluster you created earlier. You can find the client connection endpoint in the Overview blade for your cluster in the Azure portal.

Click Publish.

Now you can monitor Service Fabric Explorer to check the health status of the app you deployed. It will be in an error state for a few moments while Service Fabric downloads the image from Docker Hub, installs it on the node, and starts it. Once done, you can happily browse to get a few quotes.

Let me know if you face any issues.

You can also find more details on MSDN.

Getting Started With Docker For Windows - Containerize a C# Console App

I had some free time over the past couple of weeks, so I wanted to go deeper into Docker (or rather, containerization). While browsing various blogs and videos, I thought of writing down the very basic steps to get started with Docker on Windows.


A development computer running:
  • Visual Studio (mine is v15.5.2)
  • Docker for Windows. Get Docker CE for Windows (stable). My current version is v17.12.0-ce-win47 (15139)
So what is Docker? While you should read through this article, for developers Docker is primarily a platform to automate the deployment of applications inside containerized environments. The main goal of Docker is to create portable, self-sufficient containers from any application — think of a Node.js app, a .NET Core app, a Windows-service-style app, a Python app, anything you can think of. In this example we shall build a Docker image from a .NET Core console app, deploy it on your development machine, and run it. I shall be using Docker for Windows, but the same can be done in a Linux environment too.

Just before you start you should also want to have a look at this article (and/or this), which articulates the difference between VMs and Containers. 

Once your installation is done, you can run the 'docker info' command from PowerShell or a command prompt to verify the version (plus a few other important details) of Docker running on your machine.
Make sure you are running Windows containers: right-click the Docker icon in the taskbar — if the menu says 'Switch to Linux containers...', you are currently using Windows containers.
Let's create a basic .net core console app in Visual Studio that will print random characters in console output constantly.
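Here is a minimal sketch of such an app, assuming a .NET Core console project named 'DockerConsoleTestApp' (the name must match the dll referenced in the Dockerfile later); the exact characters printed are illustrative:

```csharp
using System;
using System.Threading;

namespace DockerConsoleTestApp
{
    class Program
    {
        static void Main(string[] args)
        {
            var random = new Random();
            // Print a random lowercase character forever so the
            // container keeps running until it is stopped
            while (true)
            {
                Console.Write((char)random.Next('a', 'z' + 1));
                Thread.Sleep(100);
            }
        }
    }
}
```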
Now we have a .NET Core console app that we can build and run on the local development machine. Press F5 and you can see the output of the program. Next is to publish this awesome app. Pretty easy: just run the 'dotnet publish' command in PS/CMD (inside the project folder) and your package shall be ready in no time. Just to make sure all is good, you can fire up a PS/CMD prompt inside the publish folder and execute 'dotnet DockerConsoleTestApp.dll' to verify your app is working fine.

Note: I could have created this package inside a container too, but for now, to keep it simple, I chose to create it on my development machine. You can read more on how to use Docker as a development platform, not only a deployment platform.

Now let's create an image for this world-class app using Docker, so that later I can package it inside one or more containers and run it almost anywhere without worrying about any further environment dependency. Let's add a Dockerfile to our solution and paste in the content below. To read more about this file, look here. I shall explain each line of that file in detail.

FROM microsoft/dotnet:2.0.4-runtime-nanoserver-1709 AS base

WORKDIR /app

COPY /bin/Debug/netcoreapp2.0/publish/ .

ENTRYPOINT ["dotnet", "DockerConsoleTestApp.dll"]
Let's understand the lines written in the Dockerfile. To start with, I need a runtime to run a .NET app, right? So how do I first build an image that has the .NET runtime, so that I can later port my code into that image to make it executable? Thankfully, you don't need to create your own. Microsoft (and other technology providers) have already created these images for you; you just need to download one from a proper repository — in this case, Docker Hub. Turns out that if you go here, Microsoft has an image already made for you with the .NET runtime inside it. Let's use that.
FROM microsoft/dotnet:2.0.4-runtime-nanoserver-1709 AS base
The above line tells Docker: go get the image named 'microsoft/dotnet' with the tag '2.0.4-runtime-nanoserver-1709' from Docker Hub and create a container out of it. Also, name it 'base' (naming is not mandatory in this case; you will see its use in a future post). How to choose a tag is important, but for now let's skip that and pick the latest one available for .NET Core 2.0.
WORKDIR /app
This line tells Docker to create a folder called 'app' at the container's base path (for Windows, the 'C:\' drive) and make 'app' the current directory inside the container. Just remember that most steps in a Dockerfile actually create separate intermediate containers as needed; Docker may or may not delete a previous container, depending on various dependencies, to boost performance on subsequent builds.
COPY /bin/Debug/netcoreapp2.0/publish/ .
The above line tells Docker to copy all files from our publish folder into the container's current working directory, which is still 'C:\app'. We can use a relative path because the Dockerfile resides in the same folder as the console app's csproj file.
ENTRYPOINT ["dotnet", "DockerConsoleTestApp.dll"]
The last line tells Docker which command to execute when a container starts from this image. The command is, without any surprise, 'dotnet DockerConsoleTestApp.dll' — the same one you executed earlier to verify your published app.
That's it, we are ready to see this at work. Fire up PS/CMD inside your project directory (where the Dockerfile resides) and run 'docker build -t alphaimage .' (note the trailing dot — it tells Docker to use the current directory as the build context). This tells Docker to execute the Dockerfile's commands one by one (as explained earlier) and build an image named 'alphaimage' (the name has to be all lowercase). You can optionally add a tag too, like 'docker build -t alphaimage:v1 .'. If no tag is provided, 'latest' is assumed. At the end of the execution you will see how all the steps described above are performed by the Docker engine as it creates the image you wanted. Remember, if the base dotnet image is not in your local registry, Docker will first download it from Docker Hub. Later, when you modify your code and run the same command again to build a new image, Docker will intelligently skip the already-downloaded layers.
To make sure you have the image ready, run 'docker images -a' and check the output. You will find a few intermediate images too that were used to build the final image.
Now you have the image that contains the super critical app you just build. So let's run it inside a container and validate. Execute the 'docker run --name alphacontainer alphaimage:latest' command from PS/CMD and voila, you can see your app's output. Press Ctrl+C to stop the execution.
So what just happened with that command? We asked Docker to create a container called 'alphacontainer' from the image called 'alphaimage' with the tag 'latest'. If you remember, the image already knows its startup path, so after creating the container Docker automatically started the app. For fun, let's run the command again with a different container name: 'docker run --name alphacontainer2 alphaimage:latest'. So now we have two containers running the same app, using my development machine's OS as their base.
To get details of all containers, execute 'docker ps -a'. You can see both containers are still running the app. When we pressed Ctrl+C, it only stopped displaying the output in the PS window.
Now imagine you had an app that is listening to Azure Service Bus and processing messages constantly. Just how easy to scale up the app with docker. This is just the starting point, scale up and down is a completely separate topic though :)
All good, and I can see the app is still running. Let's examine the inside of the container a bit. To get details about it, you can run 'docker inspect alphacontainer'; it should give you some important details like the startup point, networking, etc. To check what's inside the container (like how the app is laid out inside), you can always fire up a CMD prompt inside the container and inspect the folder structure. This works because the base dotnet image Microsoft provides comes with the command prompt installed. To run CMD inside the container, execute 'docker exec -it alphacontainer cmd'. This tells Docker to fire up CMD inside the container (remember, our app is still running inside it). You should now see a black CMD window running inside the container, and by default its working directory should be 'app', as we specified in the Dockerfile. You can check all the files that got copied by executing the 'dir' command. You can even go up to the 'C:\' drive and explore the container's file system.
Now you can play with the running container: stop it, start it again (in interactive mode with the -i parameter, or not), and remove it when done. You may want to remove the image too if it is not needed anymore.
# you must stop before removing a container
docker stop alphacontainer2
docker rm alphacontainer2
docker stop alphacontainer
docker start -i alphacontainer
docker stop alphacontainer
docker rm alphacontainer

# you must remove all containers before removing an image
docker rmi alphaimage

Now, just for a bit more fun, I'm going to remove the infinite loop from the console app, re-package the app, and follow the same method to rebuild the image.
static void Main(string[] args)
{
    var i = 0;
    while (i++ < 10)   // exits after 10 iterations instead of running forever
        Console.Write((char)new Random().Next('a', 'z' + 1));
}

Now, to run it in interactive mode without naming the container, we can execute 'docker run -it alphaimage'. Docker will create a container with a random name and run it. Remember, my program is no longer running constantly; it will exit after a short time. So when you check the status of the newly created container, you will see it has already exited (provided the while loop has completed by then :)), unlike the previous case where the containers stayed in the running state until we stopped them. You can also add a --rm flag to the command so Docker removes the container after execution completes: 'docker run --rm -it alphaimage'.

Hope you enjoyed the article. Let me know any feedback you have.