How to ship a Python application through Docker

I just started using Docker.
I made a Python application using Python 2.7. The code files are on my system and in a Bitbucket repository, and I am able to run them locally through Eclipse.
Now, how can I run those files in Docker and distribute the application to other users (without showing them the code)?
Can you help by explaining the steps in simple language?

Docker is in no way a means to hide your code.

If you want to run your code in the container, you will have to copy your code into the container. If you don't want to expose the source code, compile the Python and distribute binaries: use Cython to compile the Python to C, then distribute your app as binary extension modules (.so/.pyd) instead of .py files.
Here is an example:
http://blog.biicode.com/bii-internals-compiling-your-python-application-with-cython/
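As a rough sketch of that route (the module name secret_logic.py is hypothetical, and this assumes a C compiler plus the Python development headers are installed):

pip install cython
# compile the module in place; this produces secret_logic.so
# (or secret_logic.pyd on Windows) next to the source file
cythonize -i secret_logic.py

The rest of your application can then import secret_logic exactly as before, and you ship only the compiled extension instead of the .py file.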
Do the following three steps on your host to copy the code into the Docker container:
1. Get the short container ID:
docker ps
2. Get the full container ID:
docker inspect -f '{{.Id}}' SHORT_CONTAINER_ID
3. Copy the file:
sudo cp path-to-file-on-host /var/lib/docker/aufs/mnt/FULL_CONTAINER_ID/PATH-TO-NEW-FILE-IN-CONTAINER
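Putting the three steps together, a hypothetical run might look like this (the container ID, file names and target path are made up, and the /var/lib/docker/aufs path assumes the aufs storage driver; other storage drivers use different paths):

docker ps                                          # say the short ID shown is 1b2c3d4e5f6a
FULL_ID=$(docker inspect -f '{{.Id}}' 1b2c3d4e5f6a)
sudo cp ./app.py /var/lib/docker/aufs/mnt/$FULL_ID/opt/app.py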
The way you run the code in the container is the same as on your host, though you may need some extra configuration for ports and IP addresses.

Related

Running an attended installer inside Docker (Windows/Server Core)

I've been attempting to move an application to the cloud for a while now and have most of the services set up in pods running in a k8s cluster. The last piece has been giving me trouble: I need to set up an image with an older piece of software that cannot be installed silently. I attempted, in my Dockerfile, to install its .NET dependencies (2005.x86, 2010.x86, 2012.x86, 2015.x86, 2015.x64) and manually transfer a local install of the program, but that did not work either.
Is there any way to run through a guided install in a remote Windows image, or to determine all of the file changes made by an installer in order to make them manually?
You can track the changes made by the installer with these steps:
1. Start a new container based on your base image:
docker run --name test -d <base_image>
2. Open a shell in the new container (I am not familiar with Windows, so you might have to adapt the command below):
docker exec -ti test cmd
3. Run whatever commands you need inside the container. When you are done, exit the container.
4. Examine the changes to the container's filesystem:
docker container diff test
You can also use docker container export to export the container's filesystem as a tar archive, and then docker image import to create an image from that archive.
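For example (the archive and image names here are hypothetical):

# export the modified container's filesystem to a tar archive
docker container export test -o test-installed.tar
# create a new image from that archive; note that an imported image
# loses metadata such as CMD, so specify a command when running it
docker image import test-installed.tar my-app:installed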

Dockerfile with multiple images

What I want to build, without Docker, would look like this:
AWS EC2 Debian OS where I can install:
Python 3
Anaconda
Run Python scripts that will create new files as output
SSH into it so that I can explore the newly created files
I am starting to use Docker, and my first approach was to build everything into a Dockerfile, but I don't know how to add multiple FROM statements so that I can use the official Docker images of Debian, Python 3 and Anaconda together.
In addition, with this approach, is it possible to get into the container and explore the files that have been created?
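For what it's worth, each image is built from a single base, so rather than multiple FROM lines the usual pattern is to pick one base image and install the rest on top of it. A minimal sketch of that idea (the Miniconda installer URL and all paths are assumptions for illustration):

FROM debian
RUN apt-get update && apt-get install -y python3 wget bzip2 \
    && rm -rf /var/lib/apt/lists/*
# install Miniconda (a slimmer Anaconda) under /opt/conda
RUN wget -q https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O /tmp/miniconda.sh \
    && bash /tmp/miniconda.sh -b -p /opt/conda \
    && rm /tmp/miniconda.sh
ENV PATH=/opt/conda/bin:$PATH
# run a script that writes its output files inside the container
CMD ["python3", "/opt/scripts/run.py"]

You also don't need SSH to explore the output: docker exec -it <container> bash opens a shell in the running container, and docker cp can copy the generated files back to the host.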

What is the step-by-step process to deploy a Java app to Docker?

My engineering background is mostly in coding/development rather than deployment. We recently introduced microservices to our team, and I am doing a POC on deploying them to Docker. I made a simple application with Maven and Java 8 (not OpenJDK), and the JAR file is ready to be deployed, but I am stuck on the exact steps to deploy and run/test the application in a Docker container.
I've already downloaded Docker on my Mac and went over this documentation, but I feel like there are some steps missing in the middle, and I got confused.
I appreciate your help.
Thank you!
If you already have a built JAR file, the quickest way to try it out in docker is to create a Dockerfile which uses the official OpenJDK base image, copies in your JAR and configures Docker to run it when the container starts:
FROM openjdk:8
COPY my.jar /my.jar
CMD ["java", "-jar", "/my.jar"]
With that Dockerfile in the same directory as your JAR file, run:
docker build -t my-app .
That will create the image. Then, to run the app in a container:
docker run my-app
If you want to integrate Docker into your build pipeline, so the output of each build is a new image, then you can either compile the app inside the image (as in Mark O'Connor's comment above) or build the JAR outside the image and just use Docker to package it, as in the simple example above.
The advantage of the second approach is a smaller image which has just the app without the source code. The advantage of the first is that you can build your image on any machine with Docker; you don't need Java installed to build it.
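As an aside, if your Docker version supports multi-stage builds (17.05 and later), you can get both advantages at once: compile inside one stage and copy only the JAR into a slim final image. A rough sketch (the base image tags and paths are illustrative):

FROM maven:3-jdk-8 AS build
WORKDIR /src
COPY . .
RUN mvn -q package

FROM openjdk:8-jre
# copy only the built artifact out of the build stage
COPY --from=build /src/target/my.jar /my.jar
CMD ["java", "-jar", "/my.jar"]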

Is it possible to wrap an entire Ubuntu 14 OS in a Docker image?

I have an Ubuntu 14 desktop on which I do some of my development work.
This work mainly revolves around Django & Flask development using PyCharm.
I was wondering if it was possible to wrap the entire OS file system in a Docker container, so that my whole development environment, including PyCharm and any other tools, would become portable.
Yes, this is where Docker shines. Once you install Docker you can run:
docker run --name my-dev -it ubuntu:14.04 /bin/bash
and this will put you, as root, at a bash prompt inside a Docker container. For all intents and purposes it is the entire OS without anything extra, so you will need to install the extras yourself: PyCharm, Flask, Django, and so on. Your entire environment. The environment you start with has nothing, so you will have to add things like pip (apt-get install -y python-pip) and other goodies. Once you have your entire environment, you can exit (with exit, or ^D) and you will be back in your host operating system. Then you can commit:
docker commit -m 'this is my development image' my-dev my-dev
This takes the Docker image you just ran (and updated with your changes) and saves it on your machine with the name my-dev. Any time in the future you can run it again using the invocation:
docker run -it my-dev /bin/bash
Building a Docker image interactively like this is the hard way; it is easier once you learn how to describe the base image (ubuntu:14.04) and all of the modifications you want to make to it in a file called Dockerfile. I have an example of a Dockerfile here:
https://github.com/tacodata/pythondev
This builds my Python development environment, including Git, SSH keys, compilers, etc. It does have my name hardcoded in it, so it won't help you much with development as-is (I need to fix that). Anyway, you can download the Dockerfile, change the details to your own, and create your own image like this:
docker build -t my-dev - < Dockerfile
There are hundreds of examples on Docker Hub, which is where I started with mine.
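For instance, a minimal sketch of such a Dockerfile (the package list is an assumption; swap in your own toolchain):

FROM ubuntu:14.04
# basic development tooling plus pip for installing Python packages
RUN apt-get update && apt-get install -y \
    python-pip git openssh-client build-essential
# the frameworks mentioned above
RUN pip install flask django
CMD ["/bin/bash"]

Build it with docker build -t my-dev . from the directory containing the Dockerfile.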

Copy files from host to Docker container, then commit and push

I'm using Docker on Ubuntu. During the development phase I cloned all the source code from Git on the host, edited it in WebStorm, and ran it with Node.js inside a Docker container with -v /host_dev_src:/container_src so that I could test it.
Then, when I wanted to send it out for testing, I committed the container and pushed a new version. But when I pulled and ran the image on the test machine, the source code was missing. That makes sense, as on the test machine there is no /host_dev_src available.
My current workaround is to clone the source code on the test machine and run Docker with -v /host_test_src:/container_src. But I'd like to know if it's possible to copy the source code directly into the container and avoid that manipulation. I'd prefer to just copy, paste and run the image with the source code inside, especially since there's no Internet connection on our testing machines.
PS: It seems docker cp only supports copying files from the container to the host.
One solution is to have a git clone step in the Dockerfile which adds the source code into the image. During development, you can override this code with your -v argument to docker run so that you can make changes without rebuilding. When it comes to testing, you just check your changes in and build a new image. Now you have a fully standalone image for testing.
Note that if you have a VOLUME instruction in your Dockerfile, you will need to make sure it occurs after the git clone step.
The problem with this approach is that if you are using a compiled language, you only want your binaries to live in the final image. In this case, the git clone needs to be replaced with some code that either fetches or compiles the binaries.
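A minimal sketch of the git clone approach for the Node.js app described above (the repository URL and entry point are hypothetical):

FROM node:4
RUN git clone https://example.com/me/my-app.git /container_src
WORKDIR /container_src
RUN npm install
CMD ["node", "server.js"]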
Treat your source code as data, then package it as a data container; see https://docs.docker.com/userguide/dockervolumes/
Step 1: Create the app_src Docker image
Put a Dockerfile like this inside your Git repo:
FROM busybox
ADD . /container_src
VOLUME /container_src
Then you can build the source image:
docker build -t app_src .
During the development period, you can keep using your old solution, -v /host_dev_src:/container_src.
Step 2: Transfer this Docker image like an app image
You can transfer this app_src image to the test system the same way as your application image, probably via a Docker registry.
Step 3: Run it as a data container
On the test system, run the app container on top of it (I use Ubuntu for the demo):
docker run -d -v /container_src --name code app_src
docker run -it --volumes-from code ubuntu bash
root@dfb2bb8456fe:/# ls /container_src
Dockerfile  hello.c
root@dfb2bb8456fe:/#
Hope this helps.
(Credit to https://github.com/toffer/docker-data-only-container-demo , which is where I got the detailed ideas.)
Adding to Adrian's answer: I do a git clone in the Dockerfile, and then use
CMD git pull && start-my-service
so that the latest code on the checked-out branch gets run. This is obviously not for everyone, but it works in some software release models.
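In Dockerfile terms that looks something like this (the base image, which is assumed to already contain the cloned repo, and the start command are placeholders):

FROM my-app-base
WORKDIR /container_src
# fetch the latest commit of the checked-out branch at container start, then launch
CMD git pull && npm start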
You could try having two Dockerfiles. The base one would know how to run your app from a predefined folder, but would not declare it a volume; when developing, you run this container with your host folder mounted as a volume. The other one, the packaging one, would inherit from the base one and copy/add the files from your host directory, again without volumes, so that it carries all the files to the tester's host.
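Sketched out, the pair could look like this (all names are hypothetical):

# Dockerfile.base: knows how to run the app from /container_src,
# but deliberately declares no VOLUME
FROM node:4
WORKDIR /container_src
CMD ["node", "server.js"]

# Dockerfile.package: inherits the base image and bakes the code in
FROM my-app-base
COPY . /container_src

During development you run the base image with -v $(pwd):/container_src; for testers you build and ship the package image, which carries the files itself.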
