How to continuously change code? - Docker

I am using Docker for a Node.js application. I have been able to build the image from an existing nodesource image, install npm, and copy the source code into /usr/src/app (which I can't see on my machine, I'm guessing because it only exists inside the image/container). I then launched a container from the image, mapped it to a port, and it ran successfully. But how can I connect to it with an editor and change files? This website is in development and I would like to keep making changes to it. I've been searching but am thoroughly confused.
Here is the node image I built from:
https://hub.docker.com/r/nodesource/trusty/
Also, here is my container information:
d9fe10b0f645 rokes/0.4 "npm start" 10 hours ago Up 10 hours 0.0.0.0:49160->8080/tcp evil_hamilton
Would I need to somehow use a volume?
Here is my Dockerfile:
FROM nodesource/trusty:latest
ADD package.json package.json
RUN npm install
ADD . .
CMD ["npm", "start"]

Just mount the directory containing your code as a volume.
Add this to your Dockerfile:
VOLUME /path/to/code
and then, when running your container, use the -v option:
docker run -d -v /dir/containing/your/code:/path/to/code your_image
You can now edit your code on the fly and see the changes directly, without having to rebuild the image or restart the container.
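Applied to your setup, the run command could look roughly like this (the host path is just an example, and it assumes the app really does live at /usr/src/app inside the image, as you mentioned):
docker run -d -p 49160:8080 -v /path/to/your/local/code:/usr/src/app rokes/0.4
If your server doesn't watch files for changes, you will still need to restart the node process (or use something like nodemon) to pick up your edits.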

Related

commit version number in meta.json to git repo when building docker image

I have an application with React as the front end and Node as the back end. In the React public folder we have a meta.json that holds the version number; every time we run npm run build, the version number in that file is updated. We use this to make sure the website always displays the new release version: we also update the version number in the database, and if the two don't match the website automatically loads the new version.
We are in the process of shifting to Kubernetes, and the problem I now have is that our Dockerfile for React contains the following steps:
FROM node:12.18.3 AS build
ENV CI=false
ENV WDS_SOCKET_PORT=0
WORKDIR /app
COPY ["package.json", "package-lock.json", "./"]
RUN npm install --production
COPY . .
RUN npm run build:development
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
COPY --from=build /app/nginx-custom.conf /etc/nginx/conf.d/default.conf
We use this Dockerfile in Azure Pipelines to build an image and push it to Azure Container Registry, then use kubectl rollout restart to pull that image and restart the deployment in AKS. After npm run build runs in the Dockerfile, my meta.json file has the updated version. I want to commit and push that changed file to the Azure repo, so that the next time the pipeline runs it has the updated version number.
I have done a POC on this but have not been able to find any easy-to-follow steps.
I came across this repo, https://github.com/ShadowApex/docker-git-push, but I am not clear on how to execute it properly; any help would be greatly appreciated.
Don't add Git to the Docker image; it will only add extra layers.
Instead, once your image build has completed, you can copy the JSON file out of the image and push it from the CI machine to Git (or to a bucket, or wherever you want to manage it).
The command you can use is:
docker create --name container_name <image_name>
docker create will create a new container without running it.
The docker container create (or shorthand: docker create) command creates a new container from the specified image, without starting it.
When creating a container, the docker daemon creates a writeable container layer over the specified image and prepares it for running the specified command. The container ID is then printed to STDOUT.
This is similar to docker run -d except the container is never started.
So, once the container filesystem exists, run a command to copy the file from the container to the CI machine, simple as that.
The Docker copy command:
docker cp container_name:/app/build/meta.json .
Now that you have the file on the CI machine, you can push it to Git, or upload it to a bucket, or anywhere you like.
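Putting it together, a pipeline step might look roughly like this (the image name, file paths, and branch are placeholders for whatever your pipeline actually uses; note that with the multi-stage Dockerfile above, the build output ends up under /usr/share/nginx/html in the final image):
# create (but don't start) a container from the freshly built image
docker create --name tmp-meta myregistry.azurecr.io/frontend:latest
# copy the regenerated meta.json out of the container onto the CI machine
docker cp tmp-meta:/usr/share/nginx/html/meta.json ./public/meta.json
docker rm tmp-meta
# commit and push the updated file back to the repo
git add public/meta.json
git commit -m "update meta.json version"
git push origin HEAD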

Docker -v command wipes the container

I am creating a Docker container that will run a Minecraft server. (Yes, I know these already exist.) And of course I want the world to be saved when the container is turned off.
This is my Dockerfile:
FROM anapsix/alpine-java
COPY ./ /home
CMD ["java","-jar","/home/main.jar"]
EXPOSE 25565
Then I build the image:
docker build -t minecraftdev .
Run the container:
docker run -dp 25565:25565 -v C:/Users/user/server:/home minecraftdev
And then the files in the image (server.properties, the server jar file, and EULA.txt) are wiped.
Is there another way that I don't know of to get the container to store data, without having to place the files in the server folder?
Thank you for your answers. I was able to fix it with -v C:/Users/user/server/world:/home/world, since the world files are stored in that folder; that way the mount doesn't swap out all the files in the folder, which I didn't realise -v did.
Minecraft creates its files alongside the server.jar file, and I don't know how to change it so that it stores all the files somewhere else.
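For reference, a minimal sketch of that fix (the Windows host path is just an example): a bind mount hides whatever the image already had at the mounted path, so mounting only the world subdirectory keeps the image's own files (main.jar, server.properties, EULA.txt) intact while persisting the world data on the host:
docker build -t minecraftdev .
docker run -dp 25565:25565 -v C:/Users/user/server/world:/home/world minecraftdev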

Source files are updated, but CMD does not reflect the changes

I'm new to Docker and am trying to dockerize an app I have. Here is the Dockerfile I am using:
FROM golang:1.10
WORKDIR /go/src/github.com/myuser/pkg
ADD . .
RUN curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
RUN dep ensure
CMD ["go", "run", "cmd/pkg/main.go"]
The issue I am running into is that I will update source files on my local machine with some log statements, rebuild the image, and try running it in a container. However, the CMD (go run cmd/pkg/main.go) will not reflect the changes I made.
I looked into the container filesystem and I see that the source files are updated and match what I have locally. But when I run go run cmd/pkg/main.go within the container, I don't see the log statements I added.
I've tried using the --no-cache option when building the image, but that doesn't seem to help. Is this a problem with the golang image, or with my Dockerfile setup?
UPDATE: I have found the issue. It is related to using dep for vendoring. The vendor folder had outdated files for my package because dep ensure was pulling them from GitHub instead of using my local changes. I will be moving to Go 1.11, which supports Go modules, to fix this.
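For reference, a minimal sketch of what the Dockerfile might look like after that move to Go modules (this assumes a go.mod and go.sum exist at the repository root):
FROM golang:1.11
# work outside GOPATH so module mode is enabled automatically
WORKDIR /app
# copy module files first so dependency downloads are cached between builds
COPY go.mod go.sum ./
RUN go mod download
COPY . .
CMD ["go", "run", "cmd/pkg/main.go"]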
I see several things:
Regarding your Dockerfile:
Maybe you need a dep init before dep ensure.
You should probably check that the main.go path is correct.
Regarding Docker philosophy:
In my humble opinion, you should create the image with docker build -t <your_image_name> . (executed where your Dockerfile is), but without the CMD line.
Then run your program with docker run -d <your_image_name> go run cmd/pkg/main.go, or whatever your command is.
If something goes wrong, you can check exited containers with docker ps -a and then check their logs with docker logs <your_container_name_or_id>.
Another way to check the logs is to access the container with bash and execute go run manually:
docker run -ti <your_image_name> bash
# go run blablabla

Enabling Project Functionality in a Docker Image of node_red

I have forked both the Node-RED git repo and the Node-RED Docker image repo and am trying to modify the settings.js file to enable the Projects functionality. The settings file that ends up in the Docker container does not seem to be my modified one. My aim is to use the Docker image in a Cloud Foundry environment.
https://github.com/andrewcgaitskellhs2/node-red-docker.git
https://github.com/andrewcgaitskellhs2/node-red.git
I am also trying to install git and ssh-keygen at Docker build time so that Projects can function. I have added these to the package.json files of both the Node-RED app repo and the image repo.
If I need to start from scratch, please let me know what steps I need to take.
I would welcome guidance on this.
Thank you.
You should not be trying to install ssh-keygen and git via the package.json file.
You need to use the Node-RED Dockerfile as the base to build a new Docker image; in the Dockerfile you should use apt-get to install them and include an edited version of settings.js. Something like this:
FROM nodered/node-red-docker
RUN apt-get update && apt-get install -y git ssh-client
COPY settings.js /data
ENV FLOWS=flows.json
ENV NODE_PATH=/usr/src/node-red/node_modules:/data/node_modules
CMD ["npm", "start", "--", "--userDir", "/data"]
Where settings.js is your edited version, in the same directory as the Dockerfile.
Edited following #knolleary's comment:
FROM nodered/node-red-docker
COPY settings.js /data
ENV FLOWS=flows.json
ENV NODE_PATH=/usr/src/node-red/node_modules:/data/node_modules
CMD ["npm", "start", "--", "--userDir", "/data"]
It is not necessary to change the image. For persistence, you will mount a host directory into the container at /data, e.g. like this:
docker run --rm -d -v /my/node/red/data:/data -p 1880:1880 nodered/node-red
A settings.js file will get created in your data directory, here /my/node/red/data. Edit this file to enable projects, then restart the container.
It is also possible to place a settings.js file with projects enabled into the directory that you mount to /data inside the container before the first start.

How do I dockerize an existing application...the basics

I am using Windows and have boot2docker installed. I've downloaded images from Docker Hub and run basic commands. BUT
How do I take an existing application sitting on my local machine (let's just say it has one file, index.php, for simplicity), put it into a Docker image, and run it?
Imagine you have the following existing python2 application "hello.py" with the following content:
print "hello"
You have to do the following things to dockerize this application:
Create a folder where you'd like to store your Dockerfile.
Create a file named "Dockerfile"
The Dockerfile consists of several parts which you have to define as described below:
Like a VM, an image has an operating system. In this example, I use ubuntu 16.04. Thus, the first part of the Dockerfile is:
FROM ubuntu:16.04
Imagine you have a fresh Ubuntu VM: now you have to install some things to get your application working, right? This is done by the next part of the Dockerfile:
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y python
For Docker, you now have to create a working directory in the image. The commands you want to execute later to start your application will look for files (in our case, the Python file) in this directory. Thus, the next part of the Dockerfile creates a directory and defines it as the working directory:
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
As the next step, you copy the content of the folder where the Dockerfile is stored into the image. In our example, the hello.py file is copied to the directory we created in the step above.
COPY . /usr/src/app
Finally, the following line executes the command "python hello.py" in your image:
CMD [ "python", "hello.py" ]
The complete Dockerfile looks like this:
FROM ubuntu:16.04
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y python
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
CMD [ "python", "hello.py" ]
Save the file and build the image by typing in the terminal:
$ docker build -t hello .
This will take some time. Afterwards, check whether the image "hello", as we named it in the build command, was built successfully:
$ docker images
Run the image:
docker run hello
The output should be "hello" in the terminal.
This is a first start. When you use Docker for web applications, you have to configure ports etc.
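For instance, a web server image typically declares its port in the Dockerfile with a line such as EXPOSE 80, and you publish it when running the container (the port numbers and image name here are purely illustrative):
docker run -d -p 8080:80 my-web-image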
Your index.php is not really an application. The application is your Apache or nginx or even PHP's own server.
Because Docker uses features not available in the Windows core, you are running it inside an actual virtual machine. The only purpose for that would be training or preparing images for your real server environment.
There are two main concepts you need to understand for Docker: Images and Containers.
An image is a template composed of layers. Each layer contains only the differences from the previous layer, plus some metadata. Each layer is in fact an image itself. You should always make your image from an existing base, using the FROM directive in the Dockerfile (reference docs at time of edit; Jan Vladimir Mostert's link is now a 404).
A container is an instance of an image that has run or is currently running. When creating a container (i.e. running an image), you can map an internal directory of it to a directory outside. If there are files in both locations, the external directory hides the one inside the image, but those files are not lost. To recover them, you can commit the container to a new image (preferably after stopping it), then launch a new container from that image without mapping the directory.
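For example, recovering the image-side files could look roughly like this (container and image names are hypothetical):
docker stop my_container
docker commit my_container my_image:snapshot
docker run -d my_image:snapshot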
You'll need to build a Docker image first, using a Dockerfile. You'd probably set up Apache in it, tell the Dockerfile to copy your index.php into Apache's document root, and expose a port.
See http://docs.docker.com/reference/builder/
See my other question for an example of a Dockerfile:
Switching users inside Docker image to a non-root user (this is for copying over a .war file into tomcat, similar to copying a .php file into apache)
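As a rough sketch of that idea (using the official PHP/Apache image; the tag is only an example), the Dockerfile can be as small as:
FROM php:7.4-apache
COPY index.php /var/www/html/
Build it with docker build -t my-php-site . and run it with docker run -d -p 8080:80 my-php-site.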
First off, you need to choose a platform to run your application on (for instance, Ubuntu). Then install all the system tools/libraries necessary to run your application; this can be achieved with a Dockerfile. Then push the Dockerfile and the app to GitHub or Bitbucket. Later, you can set up automated builds on Docker Hub from GitHub or Bitbucket. The later part of this tutorial here has more on that. If you know the basics, just fast-forward it to 50:00.
