Newbie question: I have Splash running in a Docker container and Scrapy running on my local development machine. I now need to promote this to an AWS environment via Docker containers, but I can't figure out how to connect the Scrapy and Splash containers.
I'm assuming that I need to create a docker stack, but that's as far as I've got :o(
It was really quite straightforward in the end.
docker network create crawler-network
docker run --network=crawler-network --name=splash --hostname=splash --memory=6GB --restart unless-stopped -d -p 8050:8050 scrapinghub/splash --max-timeout 600 --slots 10
docker run --network=crawler-network --name=crawler --hostname=crawler -it conda
docker network inspect crawler-network
Then we changed the scrapy-splash settings to point to http://splash:8050 instead of http://localhost:8050 (the container name resolves as a hostname on the shared network, and Splash still listens on port 8050).
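The same setup can be sketched as a compose file, which creates a shared network automatically so `splash` resolves by service name (service and image names taken from the commands above; the `conda` image may differ in your setup):

```yaml
# docker-compose.yml (sketch, not from the original thread)
services:
  splash:
    image: scrapinghub/splash
    command: --max-timeout 600 --slots 10
    ports:
      - "8050:8050"
    restart: unless-stopped
  crawler:
    image: conda
    depends_on:
      - splash
```

With this layout the crawler's scrapy-splash setting would again be http://splash:8050.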
My goal is to run arbitrary GUI applications from Docker container using host Xserver.
I tried http://wiki.ros.org/docker/Tutorials/GUI#The_simple_way - Step 1
I would run the docker image using docker run --gpus all --net=host -it -p "8888:8888" -v "/home/gillian/Documents/deeplearning/:/deeplearning/:" --env=DISPLAY=$DISPLAY --env=QT_X11_NO_MITSHM=1 --volume=/tmp/.X11-unix:/tmp/.X11-unix:rw pytorch
But when I tried to run xlogo or xclock from within the container, it would always fail with Error: Can't open display: :0
After spending the night trying to fix it, I tried adding --net=host as an argument to docker run, and then I could run xclock and xlogo and they displayed on my screen without any issues.
Why?
What can I do to run the Docker image without sacrificing network isolation (i.e., without --net=host)?
I am running Kubuntu 20.04
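One commonly suggested approach (not from this thread, so treat it as an assumption to verify): since the X socket is already bind-mounted as a file, a bridge-networked container can reach it without host networking, provided the X server's access control admits local clients. That is typically done with xhost on the host:

```shell
# On the host: allow local (unix-socket) clients to connect to the X server.
xhost +local:docker

# Run with the default bridge network; the X socket is shared as a file,
# so --net=host is not needed (image name "pytorch" from the question).
docker run --gpus all -it \
  --env DISPLAY=$DISPLAY \
  --env QT_X11_NO_MITSHM=1 \
  --volume /tmp/.X11-unix:/tmp/.X11-unix:rw \
  pytorch xclock
```

If DISPLAY is set to something like localhost:10 (TCP or SSH-forwarded) rather than :0, the socket mount alone won't help, which may explain why --net=host appeared to fix it.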
I'm trying to launch a GitLab or Gitea Docker container on my QNAP NAS (Container Station) and, for some reason, when I restart the container, it won't start back up because files appear to be lost.
For example, for GitLab it gives me errors saying runsvdir-start and gitlab-ctl don't exist. For Gitea it's the s6-supervise file.
Now I am launching the container like this, just to keep it simple:
docker run -d --privileged --restart always gitea/gitea:latest
A simple docker stop .... and docker start .... breaks it. How do I troubleshoot something like this?
QNAP has sent this issue to R&D and they were able to replicate it. It's a bug and will probably be fixed in a new Container Station update.
It's now fixed in QTS 4.3.6.20190906 and later.
It's normal to lose your data if you launch it with just:
docker run -d --privileged --restart always gitea/gitea:latest
You should use a volume to share a folder between your host and the container, for example (docker run requires an absolute host path, unlike docker-compose):
docker run -d --privileged -v "$(pwd)"/gitea:/data -p 3000:3000 -p 222:22 --restart always gitea/gitea:latest
Or use docker-compose.yml (see the official docs).
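A minimal compose sketch along the same lines (ports from the answer above; the host path is an example, and compose resolves relative paths against the file's directory):

```yaml
# docker-compose.yml (sketch, not from the original thread)
services:
  gitea:
    image: gitea/gitea:latest
    restart: always
    volumes:
      - ./gitea:/data
    ports:
      - "3000:3000"
      - "222:22"
```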
I'm new to docker.
I have an image that I want to run, but first I want Docker to check whether that image is already running from another terminal; if it is, I don't want it to load another one.
Is this something that can be done with Docker?
If it helps, I'm running Docker in privileged mode.
I've tried searching for "singleton docker" or something like that, but no luck.
Updates:
1. Working from Ubuntu.
2. My scenario: from terminal X I run docker run Image_a, and from terminal Y I run docker run Image_a. When running from terminal Y, I want Docker to check whether a container from Image_a is already running, and if the answer is yes, not to run another one in terminal Y.
You can use the following docker command to get all containers running from a specific image:
docker ps --filter ancestor="imagename:tag"
Example:
docker ps --filter ancestor="drone/drone:0.5"
Example Output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3fb00087d4c1 drone/drone:0.5 "/drone agent" 6 days ago Up 26 minutes 8000/tcp drone_drone-agent_1
This approach uses the Docker API and the Docker daemon, so it doesn't matter whether the run command was executed in the background or in another terminal.
Another approach:
If you have a single container from a single image:
Try naming your containers; you can't have two containers with the same name:
docker run --name uniquecontainer Image_a
Next time you run the above command you will get an error. Btw, consider using -d so you don't have to switch terminals.
docker run -d --name uniquecontainer Image_a
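The ancestor-filter check from the first approach can also be wrapped in a small guard script (the function names here are mine, not from the thread; `Image_a` is the image name from the question):

```shell
#!/bin/sh
# is_running: succeed if at least one container from the given image is up.
is_running() {
  # Non-empty output from `docker ps -q` means a matching container exists.
  [ -n "$(docker ps -q --filter "ancestor=$1")" ]
}

# run_once: start a detached container only if none is already running.
run_once() {
  if is_running "$1"; then
    echo "$1 is already running; not starting another one."
  else
    docker run -d "$1"
  fi
}
```

Usage would be `run_once Image_a` from either terminal; whichever runs second just prints the message.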
I'm on Docker for Mac 1.13.0
When I run docker run -d cassandra:3.9, it kills the previous container each time. I cannot start more than one instance. It works as expected with nginx:latest, though, i.e. it starts multiple instances.
Why is it happening?
give it a different name
docker run -d --name cassandra1 cassandra:3.9
and then
docker run -d --name cassandra2 cassandra:3.9
I suspect this might be a Docker 1.13 problem. I don't see this issue with Docker 1.12.6 on Mac.
I'd like to dockerize my StrongLoop LoopBack based Node server and start using Process Manager (PM) to keep it running.
I've been using RancherOS on AWS which rocks.
I copied (but didn't add anything to) the following Dockerfile as a template for my own Dockerfile:
https://hub.docker.com/r/strongloop/strong-pm/~/dockerfile/
I then:
docker build -t somename .
(the Dockerfile is in the current directory)
It now appears in:
docker images
But when I try to start it, it exits right away:
docker run --detach --restart=no --publish 8701:8701 --publish 3001:3001 --publish 3002:3002 --publish 3003:3003 somename
And if I run the strong-pm image instead, after opening the ports on AWS, it works with the same command as above, just with strongloop/strong-pm instead of somename
(I can browse to aws-instance:8701/explorer)
Also, these instructions to deploy my app https://strongloop.com/strongblog/run-create-node-js-process-manager-docker-images/ require:
slc deploy http://docker-host:8701/
but Rancher doesn't come with npm (or curl) installed, and when I bash into the VM, slc isn't installed either, so it seems slc needs to run from "outside" the VM
docker exec -it fb94ddab6baa bash
If you're still reading, nice. Essentially, I'm trying to add a Dockerfile to my git repo that will deploy my app server (including pulling code from repos) on any Docker box.
The workflow for the strongloop/strong-pm Docker image assumes you are deploying to it from a workstation. The footprint of npm install -g strongloop is significantly larger than strong-pm alone, which is why the Docker image has only strong-pm installed in it.
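Put together, the intended workflow from this answer looks roughly like the following (commands taken from the thread; the host name and app path are placeholders you'd substitute):

```shell
# On the Docker host: run the process manager image as-is.
docker run -d --restart=no \
  -p 8701:8701 -p 3001:3001 -p 3002:3002 -p 3003:3003 \
  strongloop/strong-pm

# On your workstation (not inside the container): install the tooling
# and deploy the app to the running strong-pm instance.
npm install -g strongloop
cd /path/to/your/app
slc deploy http://docker-host:8701/
```

In other words, slc stays on the workstation side; only strong-pm lives in the container, which matches what you observed when bashing into the VM.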