I am trying to set up a Docker image for an MVC 5 website to deploy to my Service Fabric cluster, which runs on Windows Server 2016 with Containers.
It seems that every image with IIS configured is based on a Windows build other than 14393, and when I deploy those to Service Fabric they fail to start on my Windows servers.
Am I missing something here? Does it matter which server the Dockerfile is built on? So far it seems impossible to get a simple site up and running in a Docker container on my Service Fabric cluster. I spent over a day with microsoft/windowsservercore and it just won't work, and there seems to be no way to enable failed request tracing on it, because attempting to install Web-Server with all submodules fails.
If you go to the Docker Hub registry, find the image, and open the Tags tab, you can see every image version and the Windows build it is based on.
For ASP.NET MVC, the image microsoft/aspnet with tag 4.7.1-windowsservercore-10.0.14393.1884 is probably the one you need.
For plain IIS, the image microsoft/iis with tag windowsservercore-10.0.14393.1944 might be suitable, though you may have to add any packages your application is missing.
The problem is most likely that you are trying to use the latest image, which won't be compatible with your hosts. When you create the Dockerfile,
instead of using FROM microsoft/aspnet
you should use FROM microsoft/aspnet:4.7.1-windowsservercore-10.0.14393.1884
with the image tag after the name; otherwise you pull the latest version, which is not always compatible with the host build and should be avoided.
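As a minimal sketch (the publish folder name is a placeholder for wherever your build output lands), a Dockerfile pinned to the 14393 build could look like this:

    FROM microsoft/aspnet:4.7.1-windowsservercore-10.0.14393.1884
    # copy the published MVC site into the default IIS site folder
    WORKDIR /inetpub/wwwroot
    COPY ./PublishOutput/ .

The explicit tag is the important part: without Hyper-V isolation, the Windows build inside the container has to match the 14393 build on the cluster nodes, or the container will fail to start.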
I'm new to Docker and have been dabbling with it for the past few days. I've managed to successfully use docker-compose for a multi-container deployment involving an app server (Flask + Gunicorn) and a web server (nginx).
Now I'd like to recreate the deployment on an offline machine. From what I've researched, most people mention using docker save and docker load to transfer over the base images. However, I'm wondering whether it's possible to recreate the deployment from the image created by docker-compose build. The reason is that I'd like to skip the whole process of wheeling my Python package dependencies for offline use, which I would have to do if I started from the base images.
I've tried saving that particular image (the output of docker-compose build) and loading it on the offline machine, then running docker run and docker-compose up, but neither seems to work. I'd like to check with the community whether this method is even possible, and if so, what the right way to go about it is.
Thanks!
To solve my issue, I ended up building an image for each individual container after pip install, then using docker-compose.yml simply to spin them up. As David mentioned, it doesn't seem possible to spin up the whole deployment from the single image output by docker-compose build.
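For anyone trying the same route, the rough workflow is to save each service's image rather than a single one (image names below are examples; compose-built ones are usually named <project>_<service>):

    # on the online machine, after docker-compose build
    docker save -o app.tar myproject_app:latest
    docker save -o nginx.tar nginx:latest

    # copy the tar files across, then on the offline machine
    docker load -i app.tar
    docker load -i nginx.tar
    docker-compose up -d

As long as the compose file's services resolve to the image names you loaded, docker-compose up should reuse them instead of trying to rebuild.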
We are using ProGet as our Docker repository of choice, and we are rapidly running into size issues. There isn't a good mechanism to prune pre-release or old images that are no longer needed, the way there is for other artifact types.
I am using the Docker.DotNet NuGet library and have a process that can connect to a Docker API, evaluate images by their tags, and label or purge whatever has aged out.
The issue I am running into is that I cannot find the Docker API URL:port anywhere. My current setup is myrepo.com/docker (which is what I have registered locally), but I cannot connect my Docker client to it.
We are planning to migrate away from this repository anyhow, but the question applies to any other Docker repository as well. For example, what is the URL for interacting with Docker Hub's API?
Have you looked at the ProGet retention rules? There's a section on container rules that already seems to purge what you want...
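On the API part of the question: most registries (Docker Hub, ProGet, Nexus, a plain registry:2) speak the standard Docker Registry HTTP API v2 on the same host you push to, so tag listing looks roughly like this (a sketch; the base path and authentication details vary by product):

    # list tags for an image on a self-hosted registry
    curl -u user:pass https://myrepo.com/v2/docker/myimage/tags/list

    # Docker Hub exposes the same API at registry-1.docker.io,
    # but requires a bearer token obtained from auth.docker.io first
    curl -H "Authorization: Bearer $TOKEN" \
         https://registry-1.docker.io/v2/library/alpine/tags/list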
I'm trying to set up the deployment of Docker images to a Linux server (Debian 10).
I've searched the internet for an easy way to deploy images from a Docker repository onto a server automatically.
I know that Docker Hub has webhooks.
Also, there is an option to use Kubernetes, but it seems to be a bit too much for a simple application running on one server.
What I am looking for is a way for the server to detect that a Docker image has been updated, so that it downloads and runs the newest version.
Currently, I have set up automatic builds of Docker images on Azure DevOps that are pushed to a private repository on Docker Hub (I will most likely move to a privately hosted Nexus repository).
I am looking for suggestions on how to do this with relatively low complexity (e.g. should I use docker-compose, or some sort of bash script on the server?).
The closest thing to what I am looking for is this solution: How to auto deploy Docker Image on own server with GitLab?
I would like to know whether this is the recommended way to do it, or whether there are other, possibly easier, ways to approach it.
I found this project, which looks like a good solution for my case.
https://containrrr.github.io/watchtower/
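Watchtower itself runs as a container with access to the Docker socket and periodically pulls newer versions of the images your containers use. A minimal invocation (the interval here is an example; the config.json mount is only needed for private repositories like the Docker Hub one above):

    docker run -d \
      --name watchtower \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v ~/.docker/config.json:/config.json \
      containrrr/watchtower --interval 300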
I have made a fairly simple Go server and need to deploy it to a DigitalOcean droplet.
I know there can be issues with cross-compiling Go apps when they use cgo, so to avoid having to think about it in the future I decided to use Docker, so that my app is always built and run in the same environment.
The first thing I don't get is the development workflow. When I create a Dockerfile, I use commands to add files from my project directory into the newly created Docker image, then I run a container created from this image. But what if I edit my code? As I understand it, I must stop the container, remove the image and then build it again. This is a bit tricky for such a common situation - or am I doing things wrong?
The second question: I have created a Docker droplet on DO; what's the way to deploy my app?
Do I have to push my image to some Docker repository and pull it onto the droplet?
Or can I upload it directly?
Or do I have to scp my source code to the droplet and run the same process as on my local machine, building the image and then running a container?
But what if I edit my code? As I understand it, I must stop the container, remove the image and then build it again. This is a bit tricky for such a common situation - or am I doing things wrong?
Don't delete the image; just rebuild it. Thanks to layer caching it will be much faster than the initial build. Also, why is it tricky? It's just one or two commands, and you can create a bash or .bat script if it gets annoying.
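For example, a typical edit-rebuild-run cycle (image and container names are placeholders) is just:

    docker build -t myapp .
    docker stop myapp && docker rm myapp   # remove the old container (not the image)
    docker run -d --name myapp -p 8080:8080 myapp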
I have created a Docker droplet on DO; what's the way to deploy my app?
All three options are possible. For the second one you would have to run your own registry on the VM, which might be more than you need. Using Docker Hub isn't bad, and you could also just build the image on the server. I recommend using Docker Hub for its ease, with Watchtower set up on your server to restart your web app whenever a new image is pushed.
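A sketch of that flow (replace youruser/myapp with your own repository, and run docker login first on both machines if it is private):

    # locally
    docker build -t youruser/myapp:latest .
    docker push youruser/myapp:latest

    # on the droplet
    docker pull youruser/myapp:latest
    docker run -d --name myapp -p 80:8080 youruser/myapp:latest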
Edit: the above advice was for a plain VM, not a Docker droplet. I'm not familiar with DO, but this article should help:
https://blog.machinebox.io/deploy-machine-box-in-digital-ocean-385265fbeafd
I am new to docker.io and not sure whether this is beyond the scope of Docker. I have an existing CentOS 6.5 system, and I am trying to figure out how to create a Docker image from a CentOS Linux system I already have running. I would basically like to clone this existing system so I can port it to another cloud provider. I was able to create a Docker image from a base CentOS image, but what I want is to clone my existing system and use docker.io going forward.
Am I stuck with starting from a base CentOS image and configuring it for Docker from there? This might be more of a VirtualBox/Vagrant thing, but I'm interested in docker.io.
Looks like I need to start with the base CentOS image and create a Dockerfile with all the add-ons I need... I think I'm getting there now.
Cloning a system that is already up and running is not what Docker is intended for. Instead, Docker is meant to let you develop your OS and server installation together with the app or service, making DevOps even more DevOpsy. By starting with a clean CentOS image, you can be sure you install only what the service needs and keep everything under control; you actually don't want all the other stuff, which can produce incompatibilities. So the answer here is that you should definitely approach the problem the other way around.
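As a rough illustration of that approach (the packages and paths here are placeholders for whatever your service actually needs), the Dockerfile starts from the stock CentOS image and adds only your own dependencies:

    FROM centos:6
    # add only the packages the service actually needs (placeholder list)
    RUN yum install -y tar gzip && yum clean all
    # copy the application code in explicitly
    COPY ./app /opt/app
    WORKDIR /opt/app
    EXPOSE 8000
    # placeholder entry point: serve the app directory over HTTP
    CMD ["python", "-m", "SimpleHTTPServer", "8000"]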