I'm trying to publish a tmpnb server, but am stuck. Following the Quickstart at http://github.com/jupyter/tmpnb, I can run the server locally and access it at 172.17.0.1:8000.
However, I can't access the server remotely. I've tried adding -p 8000:8000 when I create the proxy container with the following command:
docker run -it -p 8000:8000 --net=host -d -e CONFIGPROXY_AUTH_TOKEN=$TOKEN --name=proxy jupyter/configurable-http-proxy --default-target http://127.0.0.1:9999
I tried to access the server by browsing to the machine's IP address on port 8000, but my browser still returns "This site can't be reached."
The logs for proxy are:
docker logs --details 45d836f98450
08:33:20.981 - info: [ConfigProxy] Proxying http://*:8000 to http://127.0.0.1:9999
08:33:20.988 - info: [ConfigProxy] Proxy API at http://localhost:8001/api/routes
To verify that I can access other servers running on the same machine, I tried the following command: docker run -d -it --rm -p 8888:8888 jupyter/minimal-notebook and was able to access it remotely at the machine's IP address on port 8888.
What am I missing?
I'm working on an Ubuntu 16.04 machine with Docker 17.03.0-ce.
Thanks
Create a file named docker-compose.yml with the following content, then launch the containers with docker-compose up. Since the images are pulled directly, pull-related errors are avoided.
httpproxy:
  image: jupyter/configurable-http-proxy
  environment:
    CONFIGPROXY_AUTH_TOKEN: 716238957362948752139417234
  container_name: tmpnb-proxy
  net: "host"
  command: --default-target http://127.0.0.1:9999
  ports:
    - 8000:8000

tmpnb_orchestrate:
  image: jupyter/tmpnb
  net: "host"
  container_name: tmpnb_orchestrate
  environment:
    CONFIGPROXY_AUTH_TOKEN: $TOKEN   # must match the proxy's token above
  volumes:
    - /var/run/docker.sock:/docker.sock
  command: python orchestrate.py --command='jupyter notebook --no-browser --port {port} --ip=0.0.0.0 --NotebookApp.base_url=/{base_path} --NotebookApp.port_retries=0 --NotebookApp.token="" --NotebookApp.disable_check_xsrf=True'
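Since the tmpnb_orchestrate service reads $TOKEN from the environment, export the same value used for the proxy before bringing the stack up (the token value here is just the example one from the file):

export TOKEN=716238957362948752139417234
docker-compose up -d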
A solution is available from the github.com/jupyter/tmpnb README.md file. At the end of the file under the heading "Development" three commands are listed:
git clone https://github.com/jupyter/tmpnb.git
cd tmpnb
make dev
These commands clone the tmpnb repository, change into it, and run the "dev" target from the makefile it contains. On my machine, entering those commands created a notebook on a temporary server that I could access remotely. Beware that the "make dev" command deletes potentially conflicting docker containers as part of the launching process.
Some insight into how this works can be gained by looking inside the makefile. When the configurable-http-proxy image is run on Docker, both ports 8000 and 8001 are published, and the tmpnb image is run with CONFIGPROXY_ENDPOINT=http://proxy:8001.
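For reference, a rough sketch of the equivalent docker run commands; the flag details here are illustrative, and the authoritative versions live in the tmpnb Makefile:

# generate a shared token
export TOKEN=$(head -c 30 /dev/urandom | xxd -p)

# proxy: publishes the public port (8000) and the proxy API port (8001)
docker run -d --name=proxy \
  -p 8000:8000 -p 8001:8001 \
  -e CONFIGPROXY_AUTH_TOKEN=$TOKEN \
  jupyter/configurable-http-proxy \
  --default-target http://127.0.0.1:9999

# tmpnb: linked to the proxy and pointed at its API endpoint
docker run -d --name=tmpnb --link proxy:proxy \
  -e CONFIGPROXY_AUTH_TOKEN=$TOKEN \
  -e CONFIGPROXY_ENDPOINT=http://proxy:8001 \
  -v /var/run/docker.sock:/docker.sock \
  jupyter/tmpnb \
  python orchestrate.py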
I am running Cypress version 10.9 from inside Docker on macOS. I set my base URL as localhost:80. As a simple example, I am running an Apache server on localhost:80, which, if I open it in a web browser, shows the 'It works!' page, so it is indeed up. I can also ping localhost:80 from the same terminal in which I execute my Docker Cypress container.
But I get this error every time when attempting to run my Cypress container:
Cypress could not verify that this server is running:
> http://localhost
We are verifying this server because it has been configured as your baseUrl.
I do see some Stack Overflow posts (e.g. https://stackoverflow.com/questions/53959995/cypress-could-not-verify-that-the-server-set-as-your-baseurl-is-running) that talk about this error. However, the application under test in those posts is inside another Docker container. My Apache page is not running in a container.
This is my docker-compose.yml:
version: '3'
services:
  # Docker entry point for the whole repo
  e2e:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      CYPRESS_BASE_URL: $CYPRESS_BASE_URL
      CYPRESS_USERNAME: $CYPRESS_USERNAME
      CYPRESS_PASSWORD: $CYPRESS_PASSWORD
    volumes:
      - ./:/e2e
I pass 'http://localhost' through the CYPRESS_BASE_URL environment variable.
This is the docker command I use to build my image:
docker compose up --build
And then to run the Cypress container:
docker compose run --rm e2e cypress run
Some other posts suggest running docker run with --network to make sure my Cypress container runs on the same network as the Compose network (ref: Why Cypress is unable to determine if server is running?), but I am executing 'docker compose run', which does not have a --network argument.
I also verified that my /etc/hosts has an entry of 127.0.0.1 localhost as other posts have suggested. Any suggestions? Thanks.
In a docker-compose.yml file I have defined the following service:
php:
  container_name: php
  build:
    context: ./container/php
    dockerfile: Dockerfile
  networks:
    - saasnet
  volumes:
    - ./services:/var/www/html
    - ./logs/php:/usr/local/etc/php-fpm.d/zz-log.conf
  environment:
    - "DB_PORT=3306"
    - "DB_HOST=database"
It all builds fine, and another service (nginx) using the same volume mapping, - ./services:/var/www/html finds php as expected, so it all works in the browser. So far, so good.
But now I want to go into the container because I want to run composer install from a certain directory inside the container. So I go into the container using:
docker run -it php bash
And I find myself in the container at /var/www/html, where I expect to be able to navigate as if I were on my host machine in ./services directory, but ls at this point inside the container shows no files at all.
What am I missing or not understanding about how this works?
Your problem is that you are not specifying the volume on your run command: docker run is not aware of your docker-compose.yml. If you want to run it with all the options specified there, you need to either use docker-compose run, or pass all the options to docker run:
docker-compose run php bash
docker run -it -e DB_PORT=3306 -e DB_HOST=database -v "$(pwd)/services":/var/www/html -v "$(pwd)/logs/php":/usr/local/etc/php-fpm.d/zz-log.conf php bash
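If the stack from the compose file is already up (docker-compose up -d), it's usually simpler to attach to the running container instead of starting a new one; either of these works here, since the service sets container_name: php:

docker-compose exec php bash
docker exec -it php bash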
I'm starting out with Docker and Laravel. I've cloned Laradock from GitHub. The services run fine with this command (from inside the Laradock directory):
#docker-compose up -d apache2 gitlab
The problem is at OS startup: the containers aren't running.
I've read the official Docker documentation, and it gives these commands:
#docker run -dit --restart unless-stopped laravel_apache2
#docker run -dit --restart unless-stopped laravel_gitlab
I'm not sure why, after I restart the machine, the services show as running (docker ps) but I can't access the server through Apache2 or GitLab.
If I execute the first command again from that path:
#docker-compose up -d apache2 gitlab
It's working fine again.
I'm sure the problem is between docker and docker-compose; I don't know how to make the containers started by the docker-compose command run at startup.
Should I build a container and move or configure things a different way? :(
Could you please help me make the containers started with docker-compose run at startup?
Thanks!
The best way to make this behavior permanent is to modify the docker-compose.yml, adding the following line to each service that you want the OS to bring back up at startup: restart: unless-stopped
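For example, with the two Laradock services from the question (everything else in each service stays as it is):

apache2:
  restart: unless-stopped
  # ... existing configuration ...
gitlab:
  restart: unless-stopped
  # ... existing configuration ...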
Once you've saved the modified docker-compose.yml file, you'll need to restart your services, for example:
docker-compose down
docker-compose up -d apache2 gitlab
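Note that a restart policy only takes effect if the Docker daemon itself starts at boot; on Ubuntu with systemd that is usually already the case, but it can be enabled explicitly with:

sudo systemctl enable docker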
I'm developing a server and its client simultaneously, and I'm running them in Docker containers. I'm using Docker Compose to link them up, and it works just fine for production, but I can't figure out how to make it work with a development workflow in which I've got a shell running for each one.
My docker-compose-devel.yml:
server:
  image: node:0.10
client:
  image: node:0.10
  links:
    - server
I can do docker-compose up client or even docker-compose run client but what I want is a shell running for both server and client so I can make rapid changes to both as I develop iteratively.
I want to be able to do docker-compose run server bash in one window and docker-compose run --no-deps client bash in another window. The problem with this is that no address for the server is added to /etc/hosts on the client because I'm using docker-compose run instead of up.
The only solution I can figure out is to use docker run and give up on Docker Compose for development. Is there a better way?
Here's a solution I came up with that's hackish; please let me know if you can do better.
docker-compose-devel.yml:
server:
  image: node:0.10
  command: sleep infinity
client:
  image: node:0.10
  links:
    - server
In window 1:
docker-compose --file docker-compose-devel.yml up -d server
docker exec --interactive --tty $(docker-compose --file docker-compose-devel.yml ps -q server) bash
In window 2:
docker-compose --file docker-compose-devel.yml run client bash
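On Compose 1.7 or later, the docker exec line in window 1 can be shortened, since Compose can exec into a running service directly:

docker-compose --file docker-compose-devel.yml exec server bash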
I guess your main problem is about restarting the application when there are changes in the code.
Personally, I launch my applications in development containers using forever.
forever -w -o log/out.log -e log/err.log app.js
The -w option restarts the server when there is a change in the code.
I use a .foreverignore file to exclude the changes on some files:
**/.tmp/**
**/views/**
**/assets/**
**/log/**
If needed, I can also launch a shell in a running container:
docker exec -it my-container-name bash
This way, your two applications could restart independently without the need to launch the commands yourself. And you have the possibility to open a shell to do whatever you want.
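If you want the containers themselves to run this way, here is a sketch of how it could be wired into the compose file (paths are illustrative, and it assumes forever is installed in the image, e.g. via npm install -g forever in your Dockerfile):

server:
  image: node:0.10
  volumes:
    - ./src:/src
  working_dir: /src
  # forever watches the mounted source and restarts app.js on changes
  command: forever -w -o log/out.log -e log/err.log app.js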
Edit: New proposition considering that you need two interactive shells and not simply the possibility to relaunch the apps on code changes.
Having two distinct applications, you could have a docker-compose configuration for each one.
The docker-compose.yml for the "server" app could contain this kind of information (I added different kinds of configuration for the example):
server:
  image: node:0.10
  links:
    - db
  ports:
    - "8080:80"
  volumes:
    - ./src:/src
db:
  image: postgres
  environment:
    POSTGRES_USER: dev
    POSTGRES_PASSWORD: dev
The docker-compose.yml from the "client" app could use external_links to be able to connect to the server.
client:
  image: node:0.10
  external_links:
    - project_server_1:server # Use "docker ps" to know the name of the server's container
  ports:
    - "80:80"
  volumes:
    - ./src:/src
Then, use docker-compose run --service-ports service-name bash to launch each configuration with an interactive shell.
Alternatively, the extra_hosts key may also do the trick by reaching the server app through a port exposed on the host machine.
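A sketch of that variant (the IP address is a placeholder for your host machine's address, where the server's port 8080 is published):

client:
  image: node:0.10
  extra_hosts:
    - "server:192.168.1.10"   # host machine's IP
  ports:
    - "80:80"
  volumes:
    - ./src:/src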
With this solution, each docker-compose.yml file could be committed in the repository of the related app.
First thing to mention: for a development environment you want to use volumes in docker-compose to mount your app into the container when it's started (at runtime). Sorry if you're already doing this, but it's not clear from your docker-compose.yml.
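A sketch of such a runtime mount (paths are illustrative):

server:
  image: node:0.10
  volumes:
    - ./server:/usr/src/app   # host source mounted into the container at runtime
  working_dir: /usr/src/app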
To answer your specific question: start your containers normally, then run docker-compose ps and you'll see the names of your containers. For example 'web_server' and 'web_client' (where 'web' is the directory containing your docker-compose.yml file, or the project name).
When you got name of the container you want to connect to, you can run this command to run bash exactly in the container that's running your server:
docker exec -it web_server bash
If you want to learn more about setting up a development environment for a reasonably complex app, check out this article on development with docker-compose.
When I am at work, with Ubuntu 14.04 (IP: a.b.c.d), and I want to execute a program (e.g. Firefox) in a Docker container and get the graphical output, I start a shell in the Docker container and in this shell I execute:
DISPLAY=a.b.c.d:0 firefox
On the other hand, when I am at home and I need to run a program on the work-pc and get the output on the home-pc, which has a private (NATed) IP address, I connect with:
$ ssh -X work-pc
then I run the program in that shell and get the output locally.
Is there a way to redirect the output of the Docker container to home through the "ssh -X" tunnel?
I know I could install an SSH server in the container, redirect a port on the work-pc to port 22 of the container, redirect a home-pc local port to that work-pc port (using ssh -L port:host:port work-pc), and connect from the home-pc to the container with "ssh -X" to get the output at home, but I wonder if there is another way.
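For reference, a sketch of that fallback (image name, user and ports are placeholders):

# on work-pc: publish the container's sshd on port 2222
docker run -d -p 2222:22 my-image-with-sshd
# on home-pc: forward a local port to the container's sshd on work-pc ...
ssh -L 2222:localhost:2222 work-pc
# ... then connect to the container with X forwarding
ssh -X -p 2222 user@localhost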
Thanks.
I got something to work following the instructions at https://dzone.com/articles/docker-x11-client-via-ssh.
My docker-compose has:
version: "3.7"
services:
rhel:
privileged: true
build:
context: /home/mpawlowsky/docker
dockerfile: Dockerfile
volumes:
- /tmp/.x11-unix:/tmp/.x11-unix
- /home/mpawlowsky/.Xauthority:/root/.Xauthority:rw
cap_add:
- NET_ADMIN
- NET_RAW
environment:
- DISPLAY
network_mode: host
I start the container and run in it:
$ docker-compose up -d
$ docker exec -it rhel /bin/bash
$ firefox