I had a running setup of containers for ownCloud, jwilder/nginx-proxy and JrCs/docker-letsencrypt-nginx-proxy-companion for my own cloud. Since I couldn't change the settings afterwards to accept files larger than 2 MB, I tried to set the whole thing up again from scratch.
Yet, for some reason, I can't even get the standard configuration (without the 2 MB limit) working again...
Could you help me here real quick?
First, I started the nginx-proxy:
docker run -d -p 80:80 -p 443:443 --name MY_PROXY_NAME1 \
-v /path/to/my/certs:/etc/nginx/certs:ro \
-v /etc/nginx/vhost.d \
-v /usr/share/nginx/html \
-v /var/run/docker.sock:/tmp/docker.sock:ro \
--label com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy \
jwilder/nginx-proxy
Second, I started the Let's Encrypt companion:
docker run -d --name MY_PROXY_NAME2 \
-v /path/to/my/certs:/etc/nginx/certs:rw \
-v /var/run/docker.sock:/var/run/docker.sock:ro \
--volumes-from MY_PROXY_NAME1 \
jrcs/letsencrypt-nginx-proxy-companion
Checking afterwards, both containers seem to be running:
someID jrcs/letsencrypt-nginx-proxy-companion "/bin/bash /app/en..." 6 minutes ago Up 6 minutes MY_PROXY_NAME2
someID jwilder/nginx-proxy "/app/docker-entry..." 7 minutes ago Up 7 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp MY_PROXY_NAME1
But when I start any test container or the ownCloud container, the browser simply tells me that no secure connection could be established...
Here is an example of how I started the ownCloud container:
docker run -d -v /path/to/my/data:/usr/share/webapps/owncloud/data \
--link mysql:mysql \
-e "VIRTUAL_HOST=my.domain.com" \
-e "LETSENCRYPT_HOST=my.domain.com" \
-e "LETSENCRYPT_EMAIL=my@email.com" \
--name Owncloud --expose 80 --expose 443 \
owncloud:latest
Do you have any ideas why it isn't working? I'm getting furious, since I had it running at some point and can't figure out why it's not working this time... (If I start the old proxy and ownCloud containers I can still reach them, so there are no network problems or other issues.)
So thanks a lot!
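For context, the 2 MB ceiling is nginx's default client_max_body_size. With jwilder/nginx-proxy it can usually be raised per virtual host by placing a config file named after the VIRTUAL_HOST in /etc/nginx/vhost.d. A rough sketch, assuming my.domain.com is the vhost; the run command above uses an anonymous volume for vhost.d, so this writes into the running container, and the proxy may need a restart to regenerate its config:

echo 'client_max_body_size 100m;' | docker exec -i MY_PROXY_NAME1 tee /etc/nginx/vhost.d/my.domain.com
docker restart MY_PROXY_NAME1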
When I run Docker from command line I do the following:
docker run -it -d --rm --hostname rabbit1 --name rabbit1 -p 127.0.0.1:8000:5672 -p 127.0.0.1:8001:15672 rabbitmq:3-management
I publish the ports with -p in order to reach the service from the host.
How can I do this automatically with a Dockerfile?
The Dockerfile provides the instructions used to build the docker image.
The docker run command provides instructions used to run a container from a docker image.
How can I do this automatically with a Dockerfile
You don't.
Port publishing is something you configure only when starting a container.
You can't specify published ports in a Dockerfile, but you can use Docker Compose to achieve that.
Docker Compose is a tool for running multi-container applications on Docker.
Example docker-compose.yml with ports, matching the run command above:

version: "3.8"
services:
  rabbit1:
    image: rabbitmq:3-management
    container_name: rabbit1
    ports:
      - "127.0.0.1:8000:5672"
      - "127.0.0.1:8001:15672"
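Note that the rabbitmq:3-management image already declares these ports with EXPOSE in its own Dockerfile; EXPOSE only documents ports, it never publishes them. If fixed host ports don't matter, docker run -P will publish every exposed port to a random high port on the host:

docker run -d --rm --hostname rabbit1 --name rabbit1 -P rabbitmq:3-management
docker port rabbit1

docker port rabbit1 then shows which host ports were assigned.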
I am new to docker, trying to run a pulled docker image.
docker images gives this:
REPOSITORY TAG IMAGE ID CREATED SIZE
openmined/grid-network development f760520b2550 8 days ago 785MB
openmined/grid-node development 89a4d0202703 8 days ago 3.48GB
I ran the pulled images following this link, using the command docker run -i -t f760520b2550, but got this error:
Error: '' is not a valid port number.
I tried playing with the flags, like docker run -i -t f760520b2550 -p 8080:8080, but it didn't help.
I have only installed docker recently and have done no changes in configurations. Can someone help me with this error?
I faced a similar problem working with a docker image. What worked for me was changing the Dockerfile to build the image with
--bind 0.0.0.0:8080 instead of --bind :$PORT
--bind :$PORT does work in a cloud build, but doesn't work with a plain docker run.
I don't know the reason.
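The likely reason: platforms such as Cloud Build/Cloud Run inject a PORT environment variable, while plain docker run does not, so --bind :$PORT expands to --bind : and produces exactly the "'' is not a valid port number" error above. As an illustrative sketch only (the actual Dockerfile isn't shown; the --bind flag suggests a gunicorn-style server, and app:app is a hypothetical entrypoint), the change looks like:

# Before: fails under plain docker run because $PORT is unset
# CMD gunicorn --bind :$PORT app:app
# After: bind explicitly to all interfaces on a fixed port
CMD gunicorn --bind 0.0.0.0:8080 app:app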
To expose the ports using docker-compose:

version: '3'
services:
  grid-network:
    image: openmined/grid-network:development
    ports:
      - "8080:8080"
      - "8001:8001"
Then docker-compose up -d
I have two docker images. One is jobservice and the other one is redis. I tried to link the redis container into my jobservice container by using the --link option.
The error is that docker is unable to find the image.
When I removed the --link option, it worked fine.
Two docker images
$ docker images
gcr.io/sighmo-development/jobservice 1.0.1 f0a1a4458f89 11 seconds ago 874MB
redis latest f7302e4ab3a8 2 weeks ago 98.2MB
Docker ps command
$ docker ps
848cf2992a34 redis "docker-entrypoint.s…" 8 hours ago Up 8 hours 6379/tcp some-redis
docker command to run jobservice
$ docker run -d \
--env-file /home/amareswaran_cloud/lookmyjobs-repo/LOOK_MY_JOBS/docker-env/env.list \
-v /home/amareswaran_cloud/lookmyjobs-volume/jobservice:/home/ssl --name=jobservice \
--link discovery:discovery \
--link sc_kafka:kafka \
--link scdb:scdb \
--link sc_redis:some-redis \
gcr.io/sighmo-development/jobservice:1.0.1
Expected: the docker run command should link with redis. Actual: docker reports that the image is not found.
You have the container name and alias reversed. The container name should be first, and according to docker ps, your container is named some-redis:
--link some-redis:sc_redis
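Applied to the original command, the corrected invocation would be (assuming containers named discovery, sc_kafka and scdb also exist, since --link needs an existing container by that name):

docker run -d \
--env-file /home/amareswaran_cloud/lookmyjobs-repo/LOOK_MY_JOBS/docker-env/env.list \
-v /home/amareswaran_cloud/lookmyjobs-volume/jobservice:/home/ssl --name=jobservice \
--link discovery:discovery \
--link sc_kafka:kafka \
--link scdb:scdb \
--link some-redis:sc_redis \
gcr.io/sighmo-development/jobservice:1.0.1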
It seems you're running separate containers, not arranged by a Compose file, and I strongly suggest you use one for several reasons:
you can achieve IaC (Infrastructure as Code) and commit it in a human-readable form
you can reproduce the whole setup with a single command (docker-compose up), along with tearing it down (docker-compose down)
you can easily use Docker networks and avoid the --link feature, which is deprecated
In the end, it looks like I'm missing some information needed to translate your current deployment into a Compose-based reference (I'm referring to discovery, sc_kafka, scdb and sc_redis), so YMMV, but it should work once the required services are filled in.
First of all, ensure docker-compose is installed and on your PATH, and put the content of this file in your working directory (I suppose /home/amareswaran_cloud/lookmyjobs-repo).
version: '3.7'
services:
  discovery:
    image: <DISCOVERY_IMAGE>
  sc_kafka:
    image: <KAFKA_IMAGE>
  sc_redis:
    image: redis:latest
  scdb:
    image: <DB_IMAGE>
  jobservice:
    image: gcr.io/sighmo-development/jobservice:1.0.1
    env_file:
      - ./LOOK_MY_JOBS/docker-env/env.list
    volumes:
      - ./../lookmyjobs-volume/jobservice:/home/ssl
With this simple Compose file, all containers can reach each other; just use the service name as the DNS name and there you go.
An additional improvement could be to set up several networks in order to segregate services properly, but that's a next step you can achieve on your own later; see the sketch below.
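A hedged sketch of that next step (the network name and service placement are illustrative, not prescriptive):

version: '3.7'
services:
  jobservice:
    image: gcr.io/sighmo-development/jobservice:1.0.1
    networks:
      - backend
  sc_redis:
    image: redis:latest
    networks:
      - backend
networks:
  backend:

Services on the same user-defined network resolve each other by service name; services that share no network can't talk to each other, which gives you the segregation.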
I am trying to start an ASP.NET Core container hosting a website.
It does not expose the ports when I use the following command line:
docker run my-image-name -d -p --expose 80
or
docker run my-image-name -d -p 80
Upon startup, the log shows:
Now listening on: http://[::]:80
So I assume the application is not bound to a specific address.
But it does work when using the following docker-compose file:
version: '0.1'
services:
  website:
    container_name: "aspnetcore-website"
    image: aspnetcoredocker
    ports:
      - '80:80'
    expose:
      - '80'
You need to make sure to pass all options (-d -p 80) to the docker command before naming the image as described in the docker run docs. The notation is:
docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
So please try the following:
docker run -d -p 80 my-image-name
Otherwise the parameters are used as the command/args inside the container. You would basically be running your image's entrypoint with the additional params -d -p 80 instead of passing them to the docker command itself. So in your example the docker daemon simply never receives -d and -p 80 and thus never maps the port to the host. You can also notice that, by not receiving the -d, the command runs in the foreground and you see the logs in your terminal.
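One more detail, for completeness: -p 80 publishes container port 80 to a random high port on the host. To serve on host port 80 explicitly, use the host:container form, as in the compose file:

docker run -d -p 80:80 my-image-name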
I have an image with MySQL installed. I need to map the /var/lib/mysql directory to my host system. I can see the MySQL data files within that directory when I use the following command:
docker run --rm -it --env-file=envProxy --network mynetwork --name my_db_dev -p 3306:3306 my_db /bin/bash
Now when I try to mount a directory from my host (Windows 10) by running another container from the same image, the mysql directory is blank:
docker run --rm -it --env-file=envProxy --network mynetwork -v D:/docker/data:/var/lib/mysql --name my_db_dev1 -p 3306:3306 my_db /bin/bash
I also tried this, but neither works:
docker run --rm -it --env-file=envProxy --network mynetwork -v D:\docker\data:/var/lib/mysql --name my_db_dev1 -p 3306:3306 my_db /bin/bash
One thing that I see is that the mysql directory in that path is now owned by root, instead of mysql as in the previous case.
I want all the content from the existing container (the mysql directory) to be copied back to the host mount directory.
Is that possible? And how can that be achieved?
I had the same problem on Docker Desktop (2.0.0.3 (31259)) and got the solution from this issue.
I ensured the containers were stopped, opened the Docker settings, selected "Shared Drives", removed the tick on "C" and added it again. Docker asked for the Windows account credentials and I entered the new ones. After that, and after starting the containers, the mounted volumes were fine. Problem solved.
The problem might be fixed even more simply by just resetting the credentials in the Docker settings.
If you need to get files from a container onto the host, it is better to use the docker cp command: https://docs.docker.com/engine/reference/commandline/cp/
It will look like:
docker cp my_db_dev1:/var/lib/mysql d:\docker\data
UPD
Actually I want to persist the database files across other containers,
so I wanted to use volumes
In this case you have to:
Start using docker-compose to orchestrate the containers.
In docker-compose.yml you create a volume which is shared between all containers. Something like:
docker-compose.yml
version: '3'
services:
  db1:
    image: whatever
    volumes:
      - myvol:/data
  db2:
    image: whatever2
    volumes:
      - myvol:/data
volumes:
  myvol:
Description: https://docs.docker.com/compose/compose-file/#volume-configuration-reference
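Since myvol is a named volume, it outlives the containers. A quick way to verify it and find its data (note that Compose usually prefixes the volume name with the project name, so <project>_myvol below is a placeholder):

docker volume ls
docker volume inspect <project>_myvol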
Write Windows paths with backslashes ('\'), and it is recommended to use variables to specify the path. On Linux, on the other hand, use forward slashes ('/'). For example:
docker run -it -v %userprofile%\work\myproj\some-data:/var/data
First, create a folder structure like below:
C:\Users\rajit\MYSQL_DATA\MYSQL_CONFIG
C:\Users\rajit\MYSQL_DATA\DATA_DIR
then adjust the commands like below:
docker pull mysql:8.0
docker run --name mysql-docker -v C:\Users\rajit\MYSQL_DATA\MYSQL_CONFIG:/etc/mysql/conf.d --env="MYSQL_ROOT_PASSWORD=root" --env="MYSQL_PASSWORD=root" --env="MYSQL_DATABASE=test_db" -v C:\Users\rajit\MYSQL_DATA\DATA_DIR:/var/lib/mysql -d -p 3306:3306 mysql:8.0 --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
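The MYSQL_CONFIG folder is mounted over /etc/mysql/conf.d, where the mysql:8.0 image picks up extra .cnf files at startup. A minimal illustration of what could go there (the file name and setting are hypothetical):

# Saved as C:\Users\rajit\MYSQL_DATA\MYSQL_CONFIG\my.cnf
[mysqld]
max_connections=250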
Try turning off your antivirus program or firewall, then click on "Reset credentials" under Settings / Shared Drives.
That worked for me.
Best regards.