I just started learning Docker and am trying to build a C++ project for Windows on Ubuntu.
For that I use this project, which almost works, but I get a linking error: specifically, it fails to link against libssh.
I run the build of my project using this command:
sudo docker run -v $PWD:/project/source -v $PWD/build_docker:/project/build my_qt_cross_win:qttools
where my_qt_cross_win:qttools is the image that I built by git cloning the original repo and adding some missing libraries.
Since building the image takes 2 hours (it builds the whole system), and I just need to fix this minor linking issue, I would like to add the proper libssh.a to the container that was instantiated from the my_qt_cross_win:qttools image and build my project using that modified container. But it seems I can only use images for that, because Docker complains
Unable to find image 'musing_chebyshev:latest' locally
when I try to use container name or id instead of an image.
$ sudo docker container ps -a
CONTAINER ID   IMAGE                     COMMAND                  CREATED          STATUS                      PORTS   NAMES
d207fd2d9dd8   my_qt_cross_win:qttools   "/bin/sh -c 'qmake /…"   11 minutes ago   Exited (2) 10 minutes ago           musing_chebyshev
Is there any way I can use a modified container to build my project?
I simply needed to copy the missing file into the existing container:
sudo docker cp -a libssh.a d207fd2d9dd8:/usr/lib/x86_64-linux-gnu/
And then commit the changes to the container to create a new image:
sudo docker commit d207fd2d9dd8 my_qt_cross_win/libssh_static
Then the new image appeared:
$ sudo docker images
REPOSITORY                      TAG      IMAGE ID       CREATED        SIZE
my_qt_cross_win/libssh_static   latest   9f7af733526b   21 hours ago   3.43GB
Though it did not solve my original issue, this was the answer I was looking for: a way to reuse my existing container for the build. Thanks to @AlanBirtles for the link.
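For reference, the full sequence, ending with a build that uses the new image, looks like this (the final run command just reuses the mounts from the question, so treat it as a sketch rather than verified output):
# copy the missing static library into the stopped container
sudo docker cp -a libssh.a d207fd2d9dd8:/usr/lib/x86_64-linux-gnu/
# persist the change as a new image
sudo docker commit d207fd2d9dd8 my_qt_cross_win/libssh_static
# build the project with the modified image
sudo docker run -v $PWD:/project/source -v $PWD/build_docker:/project/build my_qt_cross_win/libssh_static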
I am very lost on the steps with gcloud versus docker. I have some gradle code that built a docker image and I see it in images like so
(base) Deans-MacBook-Pro:stockstuff-all dean$ docker images
REPOSITORY                         TAG      IMAGE ID       CREATED          SIZE
gcr.io/prod-stock-bot/stockstuff   latest   b041e2925ee5   27 minutes ago   254MB
I am unclear if I need to run docker push or not, or if I can go straight to gcloud run deploy, so I try a docker push like so
(base) Deans-MacBook-Pro:stockstuff-all dean$ docker push gcr.io/prod-stockstuff-bot/stockstuff
Using default tag: latest
The push refers to repository [gcr.io/prod-stockstuff-bot/stockstuff]
An image does not exist locally with the tag: gcr.io/prod-stockstuff-bot/stockstuff
I have no idea why it says the image doesn't exist locally when I keep listing the image. I move on to just trying gcloud run deploy like so
(base) Deans-MacBook-Pro:stockstuff-all dean$ gcloud run deploy stockstuff --project prod-stock-bot --region us-west1 --image gcr.io/prod-stockstuff-bot/stockstuff --platform managed
Deploying container to Cloud Run service [stockstuff] in project [prod-stock-bot] region [us-west1]
X Deploying... Image 'gcr.io/prod-stockstuff-bot/stockstuff' not found.
X Creating Revision... Image 'gcr.io/prod-stockstuff-bot/stockstuff' not found.
. Routing traffic...
Deployment failed
ERROR: (gcloud.run.deploy) Image 'gcr.io/prod-stockstuff-bot/stockstuff' not found.
I am doing this all as a playground project and can't seem to even get a cloud run deploy up and running.
I tried the Spring example but that didn't even create a docker image, and it failed anyway with
ERROR: (gcloud.run.deploy) Missing required argument [--image]: Requires a container image to deploy (e.g. `gcr.io/cloudrun/hello:latest`) if no build source is provided.
This error occurs when an image is not tagged locally/correctly. Here are some steps you can try on your side.
Create the image locally with the name stockstuff (do not prefix it with gcr.io and the project name at this stage).
Tag the image with the GCR repository details:
$ docker tag stockstuff:latest gcr.io/prod-stockstuff-bot/stockstuff:latest
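One step seems to be missing here: the tagged image still has to be uploaded before it can appear in GCR, presumably with something like:
$ docker push gcr.io/prod-stockstuff-bot/stockstuff:latest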
Check that your image is available in GCR (you must see your image there before deploying to Cloud Run):
$ gcloud container images list --repository gcr.io/prod-stockstuff-bot
If you can see your image in the list, try deploying to Cloud Run with the command below (same as yours):
gcloud run deploy stockstuff --project prod-stock-bot --region us-west1 --image gcr.io/prod-stockstuff-bot/stockstuff --platform managed
There are 3 contexts that you need to be aware of:
Your local station, with your own docker.
The cloud based Google Container Registry: https://console.cloud.google.com/gcr/
Cloud Run product from GCP
So the steps would be:
Build your container either locally or using Cloud Build
Push the container to the GCR registry; if you built locally:
docker tag busybox gcr.io/my-project/busybox
docker push gcr.io/my-project/busybox
Deploy to Cloud Run the container from Google Container Registry.
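Putting the three contexts together with the names from the question (a sketch; it assumes the image should live under the actual project, prod-stock-bot, matching the gradle build output above):
# 1. build locally, tagged for GCR
docker build -t gcr.io/prod-stock-bot/stockstuff .
# 2. push to Google Container Registry
docker push gcr.io/prod-stock-bot/stockstuff
# 3. deploy that image to Cloud Run
gcloud run deploy stockstuff --project prod-stock-bot --region us-west1 --image gcr.io/prod-stock-bot/stockstuff --platform managed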
The image gcr.io/prod-stockstuff-bot/stockstuff that you push and deploy doesn't appear when you list images on your local system; the local image is tagged gcr.io/prod-stock-bot/stockstuff. Tag your image so the names match, push it, and re-run the gcloud run command.
For context, I am using Flask (Python).
I solved this by:
updating gcloud-sdk to the latest version:
gcloud components update
adding a .dockerignore, I'm guessing because of the Python cache:
Dockerfile
README.md
*.pyc
*.pyo
*.pyd
__pycache__
.pytest_cache
and binding the server to the port from the $PORT environment variable:
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 app:app
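For completeness, here is a minimal Dockerfile sketch that this CMD could live in; the base image, file names, and app module (app:app) are assumptions, not from the original answer:
# hypothetical minimal Dockerfile for a Flask app on Cloud Run
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt gunicorn
COPY . .
# Cloud Run injects PORT at runtime; gunicorn must bind to it
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 app:app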
We were trying to build and run a docker-compose project on a remote host. I tried using:
docker-compose -H 'ssh://remote_address' up --build
And got
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
So we tried:
docker-compose -H 'ssh://remote_address' build
docker-compose -H 'ssh://remote_address' up
Which worked fine. My problem is that I can't find evidence in the docs that this is the correct behaviour. Is this a bug in docker-compose, a feature, or a bug in my environment?
I'm not sure about the error you got for the first command (docker-compose -H 'ssh://ip' up --build), as it may really be a bug, but the three commands mentioned certainly differ. I'll try to explain in a simple way:
The first command is docker-compose up --build.
This command finds the docker-compose file, rebuilds the image, and then runs it. Suppose you have made some changes to your docker-compose file: when you only run docker-compose up, you'll get a warning that the image is not rebuilt; you should run docker-compose up --build to rebuild it and have everything built again (despite what is already present in the cache).
The second command is docker-compose build.
This command only builds your image based on the docker-compose file, but does not run it. You can see the built image with docker image ls or docker images. Likewise, running docker ps -a should not show your recently built image running.
The third and last command is docker-compose up.
If this command is entered for the first time, it tries to run everything in the Dockerfile (if one exists), downloads the base image, etc., then makes the image and runs the container.
If the image has been built before, it just runs it.
Unlike the first command, the third one only runs the latest build of that image; it does not build it again.
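As an aside, instead of passing -H to every invocation, you can export the DOCKER_HOST environment variable once; both docker and docker-compose honour it. A sketch:
# equivalent to adding -H 'ssh://remote_address' to each command
export DOCKER_HOST=ssh://remote_address
docker-compose build
docker-compose up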
I am working on a single-node kubernetes cluster built with kubeadm. During development, I create a new docker image, but the image is deleted immediately, without my permission, by what I assume is kubernetes garbage collection. How do I control this?
Environment:
kubeadm version: v1.17.2
kubelet version: v1.17.2
docker version: 19.03.5
Ubuntu 18.04 desktop
Linux kernel version: 4.15.0-74-generic
I created an image with the docker build command on the master node, and confirmed that the image was deleted immediately with docker container ls -a. If I run Docker only, the images are not deleted. So I guess the reason for the removal was the kubernetes garbage collection. – MASH
Honestly, I doubt that your recently built docker image could've been deleted by the kubernetes garbage collector.
I think you are confusing two concepts: an image and a stopped container. If you want to check your local images, you should use the docker image ls command, not docker container ls -a. The latter doesn't say anything about available images and doesn't prove that any image was deleted.
This is totally normal behaviour for Docker. Please look at this example from the docker docs.
We build a new docker image using the following commands:
# create a directory to work in
mkdir example
cd example
# create an example file
touch somefile.txt
# build an image using the current directory as context, and a Dockerfile passed through stdin
docker build -t myimage:latest -f- . <<EOF
FROM busybox
COPY somefile.txt .
RUN cat /somefile.txt
EOF
After a successful build:
Sending build context to Docker daemon 2.103kB
Step 1/3 : FROM busybox
---> 020584afccce
Step 2/3 : COPY somefile.txt .
---> 216f8119a0e6
Step 3/3 : RUN cat /somefile.txt
---> Running in 90cbaa24838c
Removing intermediate container 90cbaa24838c
---> b1e6c2284368
Successfully built b1e6c2284368
Successfully tagged myimage:latest
Then we run:
$ docker container ls -a
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
As you can see, there's nothing there, and that's totally OK.
But when you run the docker image ls command instead, you'll see our recently built image:
$ docker image ls
REPOSITORY   TAG      IMAGE ID       CREATED          SIZE
myimage      latest   b1e6c2284368   10 seconds ago   1.22MB
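For completeness: kubelet image garbage collection only kicks in under disk pressure, and it is tunable. A sketch of the relevant kubelet flags; the thresholds shown are the defaults as far as I know, so treat them as assumptions:
# image GC starts when disk usage exceeds the high threshold
# and frees images until usage drops below the low threshold
kubelet --image-gc-high-threshold=85 --image-gc-low-threshold=80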
I created a docker container image for Business Central 2 months ago. Now when I try to start the container, it starts with an unhealthy status and the Business Central client doesn't work.
docker start <container-id>
The logs told me that I am trying to run a container which is more than 90 days old:
Initializing...
Restarting Container
PublicDnsName unchanged
Hostname is MyBCDev
PublicDnsName is MyBCDev
You are trying to run a container which is more than 90 days old.
Microsoft recommends that you always run the latest version of our containers.
Set the environment variable ACCEPT_OUTDATED to 'Y' if you want to run this container anyway.
at , C:\Run\navstart.ps1: line 54
at , C:\Run\start.ps1: line 121
at , : line 1
I googled the issue and all I could find was to use the docker run command with the ACCEPT_OUTDATED parameter, but that creates a new container, whereas I want to start the existing container.
docker run --env accept_eula=Y --memory 4G microsoft/dynamics-nav
How can I start an existing docker container that is more than 90 days old?
Update
I did the docker commit using the existing container and repository:tag, but when I ran a container (docker run) from the new image, it got stuck somewhere in the middle.
Try setting ACCEPT_OUTDATED=Y and starting the container. If that doesn't work, then try this hack.
Make use of the docker commit command:
docker commit container-id myimage:v1
This will create a new docker image out of that stopped container, with all its data and config in it.
Run a new docker container from that image.
This new docker container will be almost the same as the stopped docker container that was 90 days old.
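Putting both steps together, a sketch (the image name myimage:v1 is just an example, and the environment variables mirror the answer below):
docker commit container-id myimage:v1
docker run -e ACCEPT_EULA=Y -e ACCEPT_OUTDATED=Y --memory 4G myimage:v1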
Hope this helps.
You should set ACCEPT_OUTDATED=Y
docker run -e ACCEPT_EULA=Y -e ACCEPT_OUTDATED=Y --memory 4G microsoft/dynamics-nav
I'm following the instructions to install CKAN using Docker from http://docs.ckan.org/en/ckan-2.5.7/maintaining/installing/install-using-docker.html
However, after running ckan/ckan, it starts for a second and stops immediately. After checking the container log, I can see the following error:
Distribution already installed:
ckan 2.8.0a0 from /usr/lib/ckan/venv/src/ckan
Creating /etc/ckan/ckan.ini
Now you should edit the config files
/etc/ckan/ckan.ini
ERROR: no CKAN_SQLALCHEMY_URL specified in docker-compose.yml
I have tried googling this and noticed people are having issues with installing CKAN using Docker, but not this exact error.
I've just run into the same error. My solution was to use a previous commit, as it seems Docker support in the current/recent version is broken. Ensure you remove all the docker containers and images first, then, in the CKAN directory, check out a working commit:
git checkout 7506353
...ensure all the remnant docker components are gone:
docker container prune
docker image prune
docker network prune
docker volume prune
And before you run the docker-compose build command, from your CKAN installation, open the following file:
ckan/migration/versions/000_add_existing_tables.py
...on line 8 (https://github.com/ckan/ckan/blob/master/ckan/migration/versions/001_add_existing_tables.py), add schema='public' as shown below:
meta = MetaData(schema='public')
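With that change in place, you should be able to carry on with the documented flow; presumably:
docker-compose build
docker-compose up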