Commands are not working in Ubuntu container - docker

I have created a container using the following command: docker container run -i ubuntu. However, when I try to run a command within the container, such as cd, I get the following error: bash: line 1: cd: $'bin\r': No such file or directory. What could be the issue?

When you docker run an image, or use an image in a Dockerfile FROM line, or name an image: in a Docker Compose setup, Docker first checks to see if you have that image locally. If you have that image, Docker just uses it without checking Docker Hub or the other upstream registry.
Meanwhile, you can docker build or docker tag an image with any name you want, even a name that matches an official Docker Hub image.
You mention in a comment that you at some point did run docker build -t ubuntu .... That replaces the ubuntu image with what you built, so when you later docker run ubuntu, it's running your modified image and not the official Docker Hub Ubuntu image.
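If you want to confirm what you actually have, you can inspect the local image's metadata; a small sketch (these are standard docker CLI commands, not specific to this question):
# Show when the local "ubuntu" image was created
docker image inspect ubuntu --format '{{.Created}}'
# Show the layer history; a locally built image will list your own build steps
docker image history ubuntu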
This is straightforward to fix. If you
docker rmi ubuntu
it will delete your local (modified) copy, and the next time you use it, Docker will automatically pull it from Docker Hub. It should also work to
# Explicitly get the Docker Hub copy of the image
docker pull ubuntu
# Build a custom image, pulling whatever's in the FROM line
docker build --pull -t my/image .
(You can also hit this in a Docker Compose setup if you specify both image: and build:; this tells Compose an explicit name to use for the image it builds. You do not need to repeat the FROM line in image:, and it causes trouble if you do, as sketched below. The resolution is the same as described above. I might leave image: out entirely unless you're planning to push the image to a registry.)
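For instance, a Compose file like the following (the service name app is hypothetical) would tag the locally built image as ubuntu and shadow the official one:
version: '3'
services:
  app:
    build: .
    image: ubuntu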

Related

Update Jenkins running in a Docker container

I have Jenkins running in a Docker container, alongside a local Docker registry. When I run
docker-compose up
it doesn't go outside the network; rather, it pulls the image from the local registry.
Is there a way I can update my local Docker registry with the latest Jenkins image, so that when I run docker-compose up I get the latest Jenkins? Thank you!
By default, Docker always looks for an image available on the host machine, and if no specific tag is provided it searches for the default tag, latest.
So in your case, when you already have a latest jenkins image available on the host, docker-compose will always use that image, treating it as the latest one. To use the latest image available from the registry, you need to delete the jenkins image with the latest tag from your host.
Delete the old image and pull the latest jenkins image:
docker rmi -f jenkins:latest
docker-compose stop
docker-compose rm -f
docker-compose pull
docker-compose up -d
Since this is your private registry, you need to log in to it:
docker login my-server.test:5000
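With a registry-qualified image name in the Compose file, docker-compose pull fetches from the private registry rather than Docker Hub. A minimal sketch, assuming a service named jenkins and the my-server.test:5000 registry from the login command above (the port mapping is hypothetical):
version: '3'
services:
  jenkins:
    image: my-server.test:5000/jenkins:latest
    ports:
      - "8080:8080"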

Updating a docker image without the original Dockerfile

I am working on a Flask app running on an EC2 server inside a Docker image.
The old dev seems to have removed the original Dockerfile, and I can't find any instructions on a way to push my changes into the Docker image without the original.
I can copy my changes manually using:
docker cp newChanges.py doc:/root/doc/server_python/
but I can't seem to find a way to restart Flask. I know this is not the ideal solution, but it's the only idea I have.
One way is to add newChanges.py to the existing container and commit it as a new image with a new tag, so you have a fallback option if you face any issues.
Suppose you run the official alpine image and you don't have a Dockerfile.
Every time you restart the container you will lose your newChanges.py:
docker run --rm -it --name alpine alpine
Use ls inside the container to see a list of the existing files that were created by its Dockerfile.
docker cp newChanges.py alpine:/
Run ls and verify your file was copied over
Next step
To commit these changes to your running container, do the following:
docker ps
Get the container ID and run:
docker commit 4efdd58eea8a updated_alpine_image
Now run your new image and you will see the updated changes:
docker run -it updated_alpine_image
This is what you will see in your updated_alpine_image, even without having a Dockerfile.
This is how you can rebuild an image from an existing image. You can also try @uncletall's answer below.
If you just want to restart Flask after docker cp, you can docker stop $your_container, then docker start $your_container.
If you want to update newChanges.py in a Docker image without the original Dockerfile, you can use docker export -o $your_tar_name.tar $your_container, then docker import $your_tar_name.tar $your_new_image:tag. Afterwards, keep the tar on a backup server for future use.
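For example (the container, archive, and image names here are hypothetical):
# Snapshot the container's filesystem into a tar archive
docker export -o flask_backup.tar my_flask_container
# Re-import the archive as a new, taggable image
docker import flask_backup.tar my_flask_image:v1.0
Note that docker export captures only the filesystem, not metadata such as CMD or ENV, so you may need to pass the start command explicitly when running the imported image.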
If you want to continue development later, use a Dockerfile for further changes:
you can use docker commit to generate a new image, and use docker push to push it to Docker Hub with a name like my_docker_id/my_image_name:v1.0
Your new Dockerfile:
FROM my_docker_id/my_image_name:v1.0
# your new thing here
ADD another_new_change.py /root/
# others
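You would then rebuild and push the updated image as usual; the v1.1 tag here is just an example:
docker build -t my_docker_id/my_image_name:v1.1 .
docker push my_docker_id/my_image_name:v1.1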
You can try to examine the history of the image; from there you can probably re-create the Dockerfile. Try using docker history --no-trunc image-name.
See this answer for more details

Docker inside docker with gitlab-ci.yml

I have created a GitLab runner.
I have chosen the Docker executor and ubuntu as the default image.
I have put this at the top of my .gitlab-ci.yml file:
image: microsoft/dotnet:latest
I was thinking that GitLab CI would load the ubuntu image by default if there is no image directive in the .gitlab-ci.yml file.
But there is something strange: I am now wondering whether GitLab CI is creating an ubuntu container and then creating a dotnet container inside that ubuntu container.
Here is a very ugly test I did on the GitLab server: I removed the /usr/bin/docker file and replaced it with a script that logs its arguments.
This is very strange, because jobs are still working and there is nothing in my log file...
Thanks
The ubuntu image would indeed be used if you didn't specify an image, but you did, so your jobs should run in the dotnet container without ever spinning up ubuntu.
Your test behaves the way it does because docker is the client, while dockerd is the daemon that the GitLab runner actually calls.
If you want to check what's going on, you should instead call docker ps to get a list of running containers.
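For example, while a job is running you could check on the runner host (the --format template just trims the output to the relevant columns):
docker ps --format '{{.ID}} {{.Image}} {{.Names}}'
You should see a job container based on microsoft/dotnet rather than a nested ubuntu container (the runner may also start its own helper containers).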

Docker Swarm Deploy a local Dockerfile

I am trying to deploy a stack of services in a swarm on a local machine for testing purposes, and I want to build the Docker image whenever I run or deploy a stack from the manager node.
Is it possible to achieve what I am trying to do?
On Docker Swarm you can't build an image specified in a Docker Compose file:
Note: This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file. The docker stack command accepts only pre-built images. - from docker docs
You need to create the image with docker build (on the folder where the Dockerfile is located):
docker build -t imagename --no-cache .
After this command the image (named imagename) is available in your local image cache.
You can then reference this image in your Docker Compose file like the following:
version: '3'
services:
  example-service:
    image: imagename:latest
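You can then deploy the stack from the manager node; the stack name my_stack here is hypothetical:
docker stack deploy --compose-file docker-compose.yml my_stack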
You need to build the image with docker build. Docker Swarm doesn't use tags to identify images. Instead it remembers the image ID (hash) of an image when executing stack deploy, because a tag might change later on but the hash never changes.
Therefore you should reference the hash of your image as shown by docker image ls, so that Docker Swarm will not try to find your image on some registry.
version: '3'
services:
  example-service:
    image: imagename:97bfeeb4b649
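To find that hash, you can list your local images; for example:
docker image ls imagename
and copy the value from the IMAGE ID column into the Compose file as shown above.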
While updating a service that uses a local image, you will get an error like the one below:
image IMAGENAME:latest could not be accessed on a registry to record
its digest. Each node will access IMAGENAME:latest independently,
possibly leading to different nodes running different
versions of the image.
To overcome this issue, force-update the service as follows:
docker service update --image IMAGENAME:latest --force SERVICE_NAME
In the above example that would be:
docker service update --image imagename:97bfeeb4b649 --force SERVICE_NAME

What's the difference between the docker commands: run, build, and create

I see there are three docker commands that seem to do very similar things:
docker build
docker create
docker run
What are the differences between these commands?
docker build builds a new image from the source code.
docker create creates a writeable container from the image and prepares it for running.
docker run creates the container (same as docker create) and runs it.
1. docker build . converts your Dockerfile into an image.
2. docker create your-image creates a container from the image from step 1.
3. docker start container_id starts the container from step 2.
docker run image is a shortcut for steps 2 and 3 (docker create image followed by docker start container_id).
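A small sketch of that equivalence, using the hello-world image purely as an example:
# create prepares the container but does not start it
docker create --name demo hello-world
# start -a starts the container and attaches to its output
docker start -a demo
# run does both in a single step
docker run --name demo2 hello-world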
Here is the difference between an image and a container:
Image
An image is a snapshot of your filesystem and includes the starting command of your container. An image occupies only disk space; it does not consume memory or CPU. To create an image you usually write instructions for how to build it in a Dockerfile. The FROM and RUN commands in the Dockerfile create the file snapshot. One may build an image from a Dockerfile with docker build <path-to-build-context>.
Container
You can create new containers from an image. Each container has a file snapshot which is based on the file snapshot created by the image. If you start a container, it will run the command you specified in your Dockerfile's CMD and will use part of your memory and CPU. You can start or stop a container. If you create a container, it is not started by default; this means you can't communicate with the container via ports etc. until you start it. One may create a container from an image with docker create <image>. When a container has been created, its ID is shown in the terminal. One may start it with docker start <container_id>.
