I've got an issue with running Cypress tests from a container.
In my docker-compose file I have the following services: users, client, users-db, and nginx, and they're all running.
My baseUrl is set to "http://nginx" because I read somewhere that I need to reference
the service the server is running on, which in my case is nginx.
I've also tried "http://localhost" and "http://client",
but when I run docker run -it -v $PWD:/e2e -w /e2e cypress/included:4.11.0 I keep getting "Cypress could not verify that this server is running".
Any feedback that will lead to resolution is much appreciated.
This happens because your Cypress container is not part of the network used by the docker-compose containers.
Use docker network ls to see the list of available networks. There should be one named <docker_compose.yml's directory>_default.
You can then run a container in a specific network by specifying the --network <network_name> flag in docker run.
This container should be able to talk to the compose containers by using their service names as hostnames.
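For example, assuming your docker-compose.yml sits in a directory called app (substitute your own project directory name), it would look something like this:
docker network ls
docker run -it --network app_default -v $PWD:/e2e -w /e2e cypress/included:4.11.0
With that, baseUrl set to "http://nginx" should resolve from inside the Cypress container.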
My issue was that I had
network_mode: host
in my docker-compose file. Removing it made the tests pass.
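For anyone hitting the same thing, a minimal sketch of where the offending line sat (assuming it was on the nginx service):
services:
  nginx:
    network_mode: host   # <- remove this line; with host networking the container
                         #    never joins the compose network, so http://nginx fails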
Related
I'm having a horrible time setting up the Docker configuration for my Go service. Below is an overview of my setup:
go_binary(
    name = "main_arm64",
    embed = [":server_lib"],
    goarch = "arm64",
    goos = "linux",
    visibility = ["//visibility:public"],
)

container_image(
    name = "ww_server_image",
    base = "@go_image_static_arm64//image",
    entrypoint = ["/main_arm64"],
    files = [":main_arm64"],
    ports = [
        "8080",
        "3306",
    ],
)
I have a GraphQL Playground (HTTP) running on http://localhost:8080, and despite the port supposedly being exposed, I can't access the playground UI.
All I'm trying to do is:
Be able to access the GraphQL playground and any other APIs running on other ports within the container
Be able to make requests from my Dockerized Go app to a separate MySQL container (I can't figure out how to put them on the same network with rules_docker).
docker exec -it ... /bin/bash into my Docker container (this hasn't been working because bash isn't installed, yet I have no idea how to install bash via this container_image rule)
Here is the error:
OCI runtime exec failed: exec failed: unable to start container process: exec: "bash": executable file not found in $PATH: unknown
If I take the generated Docker image ID and run docker run -p 8080:8080 IMAGE_ID, I'm able to access the GraphQL playground, but the app can't communicate with the MySQL container.
If I change the network, as in docker run --network=host -p 8080:8080 IMAGE_ID, the Dockerized Go app can successfully communicate with the MySQL container, but then the GraphQL playground becomes inaccessible. The playground is only accessible if I keep --network=bridge. I'm not sure why MySQL isn't using bridge as well, since I never specified a network when starting it. This is how I started the MySQL container:
docker run -p 3306:3306 --name my-db -e MYSQL_ROOT_PASSWORD=testing -d mysql:8.0.31
So, you have several problems here:
First of all, you can most likely access containers that don't have bash installed through docker exec -it container_name /bin/sh, as most containers at least come with sh.
Second, your host machine can only have one service per port, so when you assign network host to a container, you collide with the port mappings of other containers. That's why your GraphQL playground became unreachable after starting the Go app with network host: they both use port 8080. (Note that with --network=host, any -p mappings are simply ignored.)
Third, when you use the default bridge network, your containers can only communicate by IP, not by container name.
Also, you don't need a port mapping to let the containers communicate with each other. Port mapping is only required when something outside the Docker network needs access.
Your best option is to create a network with docker network create network_name and then assign it to all containers with the --network network_name flag on docker run.
You don't necessarily need a port mapping to get your application running, but when you want to access a service from outside - e.g. your host's browser - make sure to pick a unique host port for each container. The outside port doesn't have to be the same as the container's internal port; you can map, for instance, -p 8081:8080.
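Putting that together for your case, a sketch (the network name app_net is an assumption, IMAGE_ID is your generated image, and the MySQL command is reused from the question):
docker network create app_net
docker run -d --network app_net --name my-db -e MYSQL_ROOT_PASSWORD=testing mysql:8.0.31
docker run -d --network app_net -p 8081:8080 IMAGE_ID
The Go app can then reach MySQL at my-db:3306 (containers on a user-defined network resolve each other by name), and the playground is available on the host at http://localhost:8081.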
Since all containers belong to one app, you also might want to check whether Docker Compose is the better alternative, as it allows you to easily manage all your containers with one config file.
The answer was here:
Unable to connect to mysql server with go and docker - dial tcp 127.0.0.1:3306: connect: connection refused
It turns out I need to access MySQL using the following address, since Docker on Mac runs inside a Linux VM:
docker.for.mac.localhost:3306
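If the Go app uses the common go-sql-driver/mysql DSN format, that ends up looking something like this (the database name mydb is a placeholder):
root:testing@tcp(docker.for.mac.localhost:3306)/mydb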
My problem is specific to k6 and InfluxDB, but I think the root cause is more general.
I'm using the official k6 distribution and its docker-compose.yml to run Grafana and InfluxDB, which I start with the docker-compose up -d influxdb grafana command.
The Grafana dashboard is accessible at localhost:3000, but when I run k6 with the recommended command docker run -i loadimpact/k6 run --out influxdb=http://localhost:8086/myk6db - <script.js (following this guide), k6 throws the following error (on both Linux and macOS):
level=error msg="InfluxDB: Couldn't write stats" error="Post \"http://localhost:8086/write?consistency=&db=myk6db&precision=ns&rp=\": dial tcp 127.0.0.1:8086: connect: connection refused"
I tried the command with both localhost and 127.0.0.1 for InfluxDB, and also with the IP addresses returned by docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' k6_influxdb_1. It either failed with the above error or silently did nothing: k6 didn't complain, but no data appeared in InfluxDB.
However, if I look up the "internal IP" address that the host's network interface is using (with the ifconfig command) and use that IP (192.168.1.66), everything works fine:
docker run -i loadimpact/k6 run --out influxdb=http://192.168.1.66:8086/k6db - <test.js
So my questions are:
Why does Grafana work fine on localhost:3000 while InfluxDB on localhost:8086 doesn't?
Why does only the "internal IP" work and no other IP?
I know there is a similar question, but that doesn't answer mine.
Docker containers run in an isolated network space. Docker can maintain internal networks, and there is Compose syntax to create them.
If you're making a call to a Docker container from outside Docker space but on the same host, you can usually connect to it as localhost, and the first port number listed in the Compose ports: section. If you look at the docker-compose.yml file you link to, it lists ports: [3000:3000], so port 3000 on the host forwards to port 3000 in the container; and if you're calling http://localhost:3000 from a browser on the same host, that will reach that forwarded port.
Otherwise, calls from one container to another can generally use the container's name (as in docker run --name) or the Compose service name; but, they must be on the same Docker network. That docker-compose.yml file also lists
services:
  influxdb:
    networks:
      - k6
      - grafana
so you can reach http://influxdb:8086 using the service's normal port number, provided the calling container is on one of those two networks. If the service has ports:, they're not considered for inter-container calls.
In the Docker documentation, Networking in Compose has more details on this setup.
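This also means that, as an alternative to the trick below, you can attach a one-off container to one of those networks by hand. Assuming the compose project directory is named k6, Compose will have created a network named k6_k6:
docker run -i --network k6_k6 loadimpact/k6 run --out influxdb=http://influxdb:8086/myk6db - <script.js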
There's one final trick that can help you run the specific command you're trying to run. docker-compose run will run a one-off command, using the setup for some container in the docker-compose.yml, except without its ports: and replacing its command:. The docker-compose.yml file you reference includes a k6 container, on the k6 network, running the loadimpact/k6 image. So you can probably run
docker-compose run k6 \
run --out influxdb=http://influxdb:8086/myk6db - \
<script.js
(And probably the K6_OUT environment variable in the docker-compose.yml can supply that --out option for you.)
You shouldn't ever need to look up the container-private IP addresses. They aren't usable in a variety of common scenarios, and in between Docker networking for calls between containers and published ports for calls from outside Docker, there are better ways to make calls to containers.
Within a Docker container, I would like to connect to a MySQL database that resides on the local network. However, I get errors because it cannot resolve the hostname, so my current hotfix is to hardcode the IP (which is bound to change at some point).
Hence: is it possible to forward a hostname from the host machine to the Docker container at docker run?
Yes, it is possible. Just inject the hostname as a variable when running the docker run command:
$ hostname
np-laptop
$ docker run -ti -e HOSTNAME=$(hostname) alpine:3.7
/ # env
HOSTNAME=np-laptop
SHLVL=1
HOME=/root
TERM=xterm
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
Update:
I think you can do two things with docker run for your particular case:
1. Bind-mount the host's /etc/hosts file into the container.
2. Define any DNS server you want inside the container with the --dns flag.
So, finally the command is:
docker run -ti -v /etc/hosts:/etc/hosts --dns=<IP_of_DNS> alpine:3.7
Docker containers by default have access to the host network, and they're able to resolve DNS names using the DNS servers configured on the host, so it should work out of the box.
I remember having a similar problem in my corporate network; I solved it by referencing the remote server in the app by its FQDN - our-database.mycompany.com - instead of just our-database.
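A quick way to check what a container can actually resolve before touching the app (the hostname here is the one from my example; substitute your own):
docker run --rm alpine:3.7 nslookup our-database.mycompany.com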
Hope this helps.
People have asked similar questions and gotten good answers:
How do I pass environment variables to Docker containers?
Alternatively you can configure the DHCP/DNS server that serves the docker machines to resolve the hostnames properly. DDNS is another option that can simplify configuration as well.
This is a two-part question.
First part:
What is the best approach to run Consul and Tomcat in the same Docker container?
I've built my own image, installing both Tomcat and Consul correctly, but I am not sure how to start them. I tried putting both calls as CMD in the Dockerfile, with no success. I tried putting Consul as an ENTRYPOINT (Dockerfile) and having Tomcat called in the "docker run" command. It could be vice versa, but I have a feeling neither is a good way.
The containers will run in one AWS instance. Each Docker container would run Consul as a server, to register itself with another AWS instance. Consul and consul-template will be integrated for proper load balancing. This way, my HAProxy instance will be able to correctly forward requests as I plug or unplug containers.
Second part:
In individual tests, the Docker container was able to reach my main Consul server (leader), but it failed to register itself as an "alive" node.
Reading the logs on the Consul server, I think it is a matter of which ports I am exposing and publishing. In AWS, I have already allowed communication on all TCP and UDP ports between the instances in this particular security group.
Do you know which ports I should be exposing and publishing to allow proper communication between a standalone Consul (AWS instance) and the Consul servers (running inside Docker containers in another AWS instance)? What is the command to run the docker container: docker run -p 8300:8300 .........
Thank you.
I would use ENTRYPOINT to kick off a script on docker run.
Something like
ENTRYPOINT myamazingbashscript.sh
Syntax might be off, but you get the idea.
The script should start both services and finally tail -f the Tomcat logs (or any other logs).
tail -f will prevent the container from exiting, since the tail -f command never exits, and it will also help you see what Tomcat is doing.
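A minimal sketch of such a script; the Consul config directory and the Tomcat paths are assumptions, adjust them to your image:
#!/bin/bash
# start Consul in the background (config path is an assumption)
consul agent -config-dir=/etc/consul.d &
# start Tomcat without blocking (path is an assumption)
/usr/local/tomcat/bin/catalina.sh start
# keep the container alive and stream the Tomcat log
tail -f /usr/local/tomcat/logs/catalina.out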
Run docker logs -f ... to watch the logs after a docker run.
Note that because the container doesn't exit, you can exec into it with docker exec -it containerName bash.
This lets you have a sniff around inside the container.
It's generally not the best approach to have two services in one container, because it destroys the separation of concerns and reusability, but you may have valid reasons.
To build, use docker build, then run with docker run as you stated.
If you decide to go for a two-container solution, you will need to expose ports between the containers to allow them to talk to each other, as sketched below. You could share files between containers using volumes_from.
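For the two-container route, a sketch using a user-defined network so the containers can reach each other by name (the network name, image tags, and the Consul command are assumptions):
docker network create consul-net
docker run -d --network consul-net --name consul consul agent -dev -client=0.0.0.0
docker run -d --network consul-net --name tomcat -p 8080:8080 tomcat:9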
I have an Ubuntu machine, which is a VM with Docker installed in it. I use this machine from my local Windows machine over SSH, opening a terminal to the Ubuntu machine.
Now I am going to take a Docker image which contains all the necessary software, e.g. Apache, installed in it. Later I am going to deploy a sample application (which is a web application) onto it and save it.
I am confused about how to check whether the deployed application is running properly, i.e., what would be the address of the container that contains the deployed application?
For example, if I type http://127.x.x.x, which is the address of the Ubuntu machine, I just get a timeout.
Can anyone tell me how to verify the deployed application? Printing the program's output to the console works seamlessly, as the output gets printed; it's only the web application I have doubts about.
There are several ways to check whether your app is running.
Remote API
As JimiDini said, one possibility is the Docker remote API. You can use it to see all running containers (which would be your use case, right?), inspect a certain container, or start and stop containers. The API is a REST API with several bindings for programming languages (at https://docs.docker.io/reference/api/remote_api_client_libraries/). Some of them are very outdated. To use the Docker remote API from another machine, I needed to open it explicitly:
docker -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock -d &
Note that the API is open to the world now! In a real scenario you would need to secure it in some way (e.g. see the example at http://java.dzone.com/articles/securing-docker%E2%80%99s-remote-api).
Docker PS
To see all running containers, run docker ps on your host. If you do not see your app, it is not running. The listing also shows you the ports your app is exposing. You can also do this via the remote API.
Logs
You can also check the logs. You can run docker attach <container id> to attach to a certain container and see its stdout. You can also run docker logs <container id> to receive the Docker logs. What I prefer is to write the logs to a certain directory, e.g. all logs to /var/log, and mount this folder to my host machine. Then all your logs will end up in /home/ubuntu/docker-logs on your host.
docker run -p 80:8080 -v /home/ubuntu/docker-logs:/var/log:rw my/application
A word about ports and IPs
Every container gets its own IP address. You can check this IP address via the remote API or via Docker on the host machine directly. You can also specify a certain hostname for the container (by passing --hostname="test42" to the run command). However, you mostly do not need that.
To access the application in the container, you need to open the port in the container and bind to a port on the host.
In your Dockerfile you need to EXPOSE the port your app runs on:
FROM ubuntu
...
EXPOSE 8080
CMD run-my-app.sh
When you start your container, you need to bind this port to a port of the host:
docker run -p 80:8080 my/application
Now you can access your app on http://localhost:80 or http://127.0.0.1:80.
If your app does not respond, check whether the container is running by typing docker ps or via the remote API. If it is not running, check the logs for the reason.
(Note: If you run your Ubuntu VM in something like VirtualBox and you try to access it from your Windows machine, make sure you opened the ports in VirtualBox too!).
Each Docker container has a separate IP address. By default it is private (accessible only from the host machine).
Docker provides all metadata (including IP address) via its API:
https://docs.docker.io/reference/api/docker_remote_api_v1.10/#inspect-a-container
https://docs.docker.io/reference/api/docker_remote_api_v1.10/#monitor-docker-s-events
You can also take a look at a little tool called docker-gen for inspiration. It monitors Docker events and creates configuration files on the host machine using templates.
To obtain the IP address of a Docker container, if you know its ID (a long hex string) or if you named it:
docker inspect --format '{{ .NetworkSettings.IPAddress }}' <container-id-or-name>
Docker runs its own network, and to get information about it you can run the following commands:
docker network ls
docker network inspect <network name>
docker inspect <container id>
In the output, you should be able to find the IP.
But there are also a couple of things you need to be aware of regarding the Dockerfile and the docker run command:
when you only EXPOSE a port in the Dockerfile, the service in the container is not accessible from outside Docker, only from inside other Docker containers
and when you EXPOSE the port and use the docker run -p ... flag, the service in the container is accessible from anywhere, even outside Docker
So for example, if your Apache is running on port 8080, you should expose it in the Dockerfile and then you can run it as:
docker run -d -p 8080:8080 <image name>, and you should be able to access it from your host at http://localhost:8080.
It is an old question/answer but it might help somebody else ;)
Working as of 2020:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id