I used an online tutorial (replit.com) to build a small Flask project.
https://github.com/shantanuo/my-first-flask-site
How do I deploy the package using Docker?
If you want to create and push an image, you first have to sign up to Docker Hub and create a repo, unless you have done so already or can access a different container registry. I'll assume you're using the public hub and that your user is called shantanuo.
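For reference, authenticating from the command line looks like this (Docker will prompt for the password):
docker login --username shantanuo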
Creating the image locally
The Dockerfile just needs to copy all the code and artifacts into the image, install the missing dependencies, and define a working entrypoint. I'll use a slim Python 3.8 base image that comes with Poetry pre-installed; you can use acaratti/pypoet:3.8-arm as the base image instead if you want to support ARM chipsets as well.
FROM acaratti/pypoet:3.8
# copy the application code and dependency manifests into the image
COPY static static
COPY templates templates
COPY main.py poetry.lock pyproject.toml ./
# install the dependencies declared in pyproject.toml
RUN poetry install
# if "python main.py" is how you want to run your server
ENTRYPOINT [ "poetry", "run", "python", "main.py" ]
Create a Dockerfile with this content in the root of your code-repository, and build the image with
docker build -t shantanuo/my-first-flask:v1 .
If you plan to create multiple versions of the image, it's a good idea to tag each one before pushing a major change. I just used a generic v1 to start off here.
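As a sketch, later versions could be tagged like this (v2 and stable are made-up tags for illustration):
docker build -t shantanuo/my-first-flask:v2 .
docker tag shantanuo/my-first-flask:v2 shantanuo/my-first-flask:stable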
Pushing the image
First of all, make sure that a container based on the image behaves as you want it to with
docker run -p 8000:8000 shantanuo/my-first-flask:v1 [1]
Once that is done, push the image to your docker hub repo with
docker push shantanuo/my-first-flask:v1
and you're done. Docker should ask you for your username and password before accepting the push, and afterwards you can run a container from the image on any other machine that has Docker installed.
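For example, on another machine (assuming the repo is public; otherwise run docker login first):
docker pull shantanuo/my-first-flask:v1
docker run -p 8000:8000 shantanuo/my-first-flask:v1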
[1] When running a server from a container, remember to publish the port that the server inside the container listens on. Also, never bind the server to localhost, or it will be unreachable from outside the container.
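For example, if main.py uses Flask's built-in development server (a sketch, since I haven't seen the actual file), it should bind like this:
# main.py -- minimal sketch, not the actual file from the repo
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "hello"

if __name__ == "__main__":
    # bind to 0.0.0.0 (not 127.0.0.1) so the server is reachable
    # through the port published with -p 8000:8000
    app.run(host="0.0.0.0", port=8000)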
I use something like this in my Dockerfile:
FROM python:3.7-slim AS base
# pin the Poetry version for reproducible builds
RUN pip install poetry==1.1.4
# copy only the dependency manifests so this layer can be cached
COPY *.toml *.lock /
# install the dependencies into the system interpreter, not a virtualenv
RUN poetry config virtualenvs.create false \
    && poetry install \
    && poetry config virtualenvs.create true
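That stage only covers the dependency installation; a complete file might continue along these lines (app.py and the /app path are placeholders, adjust them to your project):
COPY . /app
WORKDIR /app
# app.py is a placeholder; use whatever script starts your server
CMD ["python", "app.py"]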
I have a Dockerfile in a directory on my local computer. That directory contains files and folders that I need on my cloud server. My understanding is that COPY will make those files and folders part of the Docker image, inside a newly created /app folder. When I build the image with docker build -t app:2023.01.25 ., it builds without issues. Then I tag it and push it to Google Cloud's Container Registry with docker tag app:2023.01.25 gcr.io/app/app:2023.01.25 followed by docker push gcr.io/app/app:2023.01.2 . I then spawn a new server with Google's Python SDK using the new image. I check to verify that the image was used for the server, and it appears so. But when I SSH into the server, I can't find any files. Am I misunderstanding Docker? This is my first time using Docker images.
# Dockerfile
FROM intel/oneapi-hpckit:devel-ubuntu20.04
COPY . /app
WORKDIR /app
RUN python3 -m pip install -r requirements.txt
Windows cmd:
docker build -t app:2023.01.25 .
docker tag app:2023.01.25 gcr.io/app/app:2023.01.25
docker push gcr.io/app/app:2023.01.2
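A quick way to check locally whether the files actually made it into the image (the ls is just an illustration):
docker run --rm app:2023.01.25 ls -la /app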
I have an application with a React front end and a Node back end. In React's public folder we have a meta.json file that holds the version number; every time we run npm run build, the version number in that file is updated. We use this method to make sure the website always displays the new release version: we also update the version number in the database, and if the two don't match, the website automatically loads the new version.
We are in the process of shifting to Kubernetes, and the problem I now have involves our Dockerfile for React, which contains the following steps:
FROM node:12.18.3 AS build
ENV CI=false
ENV WDS_SOCKET_PORT=0
WORKDIR /app
COPY ["package.json", "package-lock.json", "./"]
RUN npm install --production
COPY . .
RUN npm run build:development
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
COPY --from=build /app/nginx-custom.conf /etc/nginx/conf.d/default.conf
We use this Dockerfile in Azure Pipelines to build an image, push that image to Azure Container Registry, and then use kubectl rollout restart to pull the image and restart the deployment in AKS. After npm run build runs inside the Dockerfile, my meta.json file has the updated version; I want to commit and push that changed file back to the Azure repo, so that the next time the pipeline runs it starts from the updated version number.
I have done a POC on this but was not able to find any easy-to-follow steps.
I came across this repo https://github.com/ShadowApex/docker-git-push but it's not clear how to use it properly; any help would be greatly appreciated.
Don't add Git into the Docker image; it will only add extra layers to the image.
Instead, once your image build has completed, copy the JSON file out of the Docker image and push it from the CI machine to Git (or to a bucket, or wherever you want to manage it).
The command to use is:
docker create --name container_name image_name:tag
docker create (long form: docker container create) creates a new container from the specified image without starting it. From the docs:
When creating a container, the docker daemon creates a writeable container layer over the specified image and prepares it for running the specified command. The container ID is then printed to STDOUT. This is similar to docker run -d except the container is never started.
Once the container's filesystem exists, run a command to copy the file from the container to the CI machine; it's as simple as that.
The docker cp command:
docker cp container_name:/app/build/meta.json .
Now that you have the file on the CI machine, you can upload it to Git, a bucket, or anywhere else.
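Put together, the CI steps could look like this sketch (the image name and branch are placeholders; note that in the final nginx stage of the Dockerfile above, the build output lives under /usr/share/nginx/html, not /app/build):
# create (but do not start) a container from the freshly built image
docker create --name tmp-meta myregistry.azurecr.io/frontend:latest
# copy the generated meta.json out of the container's filesystem
docker cp tmp-meta:/usr/share/nginx/html/meta.json .
# remove the temporary container
docker rm tmp-meta
# commit the updated file back to the repo
git add meta.json
git commit -m "Update meta.json version"
git push origin main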
I am running a docker-in-docker container that always uses the same few images.
I would like to pre-pull those in my dind container so I don't have to pull them at startup.
How would I be able to achieve this?
I was thinking of building my own dind image along the lines of
FROM docker:18.06.1-ce-dind
RUN apk update && \
apk upgrade && \
apk add bash
RUN docker pull pre-pulled-image:1.0.0
Obviously the above Dockerfile will not build, because Docker is not running during the build, but it should give an idea of what I'd like to achieve.
You can't do this.
If you look at the docker:dind Dockerfile, it contains the declaration
VOLUME /var/lib/docker
That makes it impossible to create a derived image with any different content in that directory tree. (This is the same reason you can't create a mysql or postgresql image with prepopulated data.)
Here is one way to achieve this.
The idea is to save the images that you want to use, and then to import those images during the container startup process.
For example, you can use a folder named images to store your saved images.
You also need a customized entrypoint script that imports the images.
So the Dockerfile will be:
FROM docker:19.03-dind
RUN apk add --update --no-cache bash tini
COPY ./images /images
COPY ./entrypoint.sh /entrypoint.sh
ENTRYPOINT ["tini", "--", "/entrypoint.sh"]
(tini is needed because the entrypoint script runs the Docker daemon in the background)
And the entrypoint.sh script:
#!/bin/bash
# Turn on bash's job control
set -m
# Start docker service in background
/usr/local/bin/dockerd-entrypoint.sh &
# Wait until the docker service is up
while ! docker info; do
echo "Waiting for docker..."
sleep 3
done
# Import pre-installed images
for file in /images/*.tar; do
docker load <$file
done
# Bring docker service back to foreground
fg %1
FYI, here is an example of how to save an image:
docker pull instrumentisto/nmap
docker save instrumentisto/nmap >images/nmap.tar
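To try it out (the image name is arbitrary; --privileged is required for dind to start its own daemon):
docker build -t dind-preloaded .
docker run -d --privileged --name dind dind-preloaded
# once the daemon is up, the pre-loaded images should be listed
docker exec dind docker images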
I am using Windows and have boot2docker installed. I've downloaded images from Docker Hub and run basic commands. BUT:
How do I take an existing application sitting on my local machine (let's just say it has one file, index.php, for simplicity), put it into a Docker image, and run it?
Imagine you have the following existing Python 2 application, hello.py, with the following content:
print "hello"
You have to do the following things to dockerize this application:
Create a folder in which to store your Dockerfile.
Create a file named "Dockerfile".
The Dockerfile consists of several parts, which you have to define as described below:
Like a VM, an image has an operating system. In this example, I use Ubuntu 16.04. Thus, the first part of the Dockerfile is:
FROM ubuntu:16.04
Imagine you have a fresh Ubuntu VM; now you have to install some things to get your application working, right? This is done by the next part of the Dockerfile:
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y python
Next, you have to create a working directory in the image. The commands that you execute later to start your application will look for files (in our case, the Python file) in this directory. Thus, the next part of the Dockerfile creates a directory and defines it as the working directory:
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
As a next step, you copy the contents of the folder where the Dockerfile is stored into the image. In our example, the hello.py file is copied into the directory we created in the step above.
COPY . /usr/src/app
Finally, the following line defines the command "python hello.py" to be executed when a container is started from the image:
CMD [ "python", "hello.py" ]
The complete Dockerfile looks like this:
FROM ubuntu:16.04
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y python
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
CMD [ "python", "hello.py" ]
Save the file and build the image by typing in the terminal:
$ docker build -t hello .
This will take some time. Afterwards, check whether the image "hello" (as we named it in the build command) has been built successfully:
$ docker images
Run the image:
$ docker run hello
The output should be "hello" in the terminal.
This is just a first step. When you use Docker for web applications, you also have to configure ports, etc.
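For a web application, you would typically add an EXPOSE instruction to the Dockerfile and publish the port when starting the container; a sketch (the port and image name are placeholders):
# in the Dockerfile: document the port your app listens on
EXPOSE 8080
Then publish it on the host at run time:
$ docker run -p 8080:8080 my-web-app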
Your index.php is not really an application. The application is your Apache or nginx or even PHP's own server.
Because Docker uses features not available in the Windows kernel, you are running it inside an actual virtual machine. The only purpose of that would be training or preparing images for your real server environment.
There are two main concepts you need to understand for Docker: Images and Containers.
An image is a template composed of layers. Each layer contains only the differences from the previous layer, plus some image metadata. Each layer is in fact itself an image. You should always make your image from an existing base, using the FROM directive in the Dockerfile (reference docs at time of edit; Jan Vladimir Mostert's link is now a 404).
A container is an instance of an image that has run or is currently running. When creating a container (a.k.a. running an image), you can map an internal directory from it to the outside. If there are files in both locations, the external directory overrides the one inside the image, but those files are not lost. To recover them, you can commit a container to an image (preferably after stopping it), then launch a new container from the new image, without mapping that directory.
You'll need to build a Docker image first, using a Dockerfile. You'd probably set up Apache in it, tell the Dockerfile to copy your index.php file into Apache's document root, and expose a port.
See http://docs.docker.com/reference/builder/
See my other question for an example of a Dockerfile:
Switching users inside Docker image to a non-root user (that one copies a .war file into Tomcat, similar to copying a .php file into Apache)
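For the index.php case specifically, a minimal sketch using the official php image in its Apache variant (the tag is just an example) could be:
FROM php:7.4-apache
# Apache's document root in this image is /var/www/html
COPY index.php /var/www/html/
EXPOSE 80
Build and run it with:
$ docker build -t my-php-site .
$ docker run -p 8080:80 my-php-site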
First off, you need to choose a platform to run your application (for instance, Ubuntu). Then install all the system tools and libraries necessary to run your application. This can be achieved with a Dockerfile. Then push the Dockerfile and app to GitHub or Bitbucket. Later, you can set up an automated build on Docker Hub from GitHub or Bitbucket. The later part of this tutorial has more on that; if you know the basics, just fast-forward to 50:00.