Unable to access containerised version of Charles proxy web interface - docker

I'm trying to containerize Charles web proxy. I'm using the Charles Proxy image shown here -> https://hub.docker.com/r/kurron/docker-charles-proxy/tags . When I:
Fire up docker
Create a container from the above image by executing the following:
docker run -d -p 8890:8888 --name [CONTAINER_NAME] kurron/docker-charles-proxy
Open a browser and try to access the Charles web proxy web interface within the container by entering "http://localhost.charlesproxy.com:8890/"
I get the following error:
Any idea what could be causing this? What's strange is that if I proxy a mobile phone through the machine that's running the Docker container and type http://control.charles/ into the mobile phone's browser, I can access the web interface, but I can't access it on the machine that's running the container.
I've also tried using the suggestion mentioned below but get the same issue:
https://www.charlesproxy.com/documentation/faqs/localhost-traffic-doesnt-appear-in-charles/

The image you're trying to run hasn't been updated in four years. You might want to find something more supported (or build your own).
You can't just docker run that image. Charles is a GUI program, and needs to be able to bring up an interface on your local system, which requires access to your X windows server. If you read the documentation associated with the image, it says, among other things:
Launching The Image
docker-compose up will launch the image allowing you to begin working on projects. The Docker Compose file is configured to mount your home directory into the container.
Looking at the docker-compose.yaml file in the repository, we see:
version: '2'
services:
  charles:
    build: .
    image: charles:compose
    container_name: "charles"
    network_mode: "host"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /tmp/.X11-unix:/tmp/.X11-unix
      - /home/vagrant:/home/developer
    stdin_open: true
    tty: true
    user: 1000:1000
    environment:
      DISPLAY: unix:0.0
There are some problems there, but the basic idea is:
Run the image using your UID
Mount your home directory into the container (note that any writable directory will do; it doesn't actually require access to your home directory)
Mount your X11 socket into the container
Set the DISPLAY environment variable correctly
I was able to run it like this:
docker run -u $UID --net=host \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v $HOME/charles:/home/developer \
-e DISPLAY=unix$DISPLAY \
kurron/docker-charles-proxy
This will place files generated by Charles inside ~/charles/ (where you'll find .charles_config, a .charles directory, and maybe a .java directory).
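Depending on how your X server is configured, the container may still be refused access to the X11 socket. A hedged workaround, only needed if the Charles window never appears, is to allow local clients before starting the container:
# allow local (non-network) clients to connect to your X server
xhost +local:
# ...then run the docker run command above...
# revoke the permission again when you are done
xhost -local: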

Related

Docker volumes mounting on Windows 8 is not working

Context
I want to run a Docker Compose application on Windows 8. I built it under Ubuntu 16.04, where it works perfectly.
This Docker Compose setup runs:
nginx
php-fpm
The two containers use volumes.
Files
My .env file:
COMPOSE_CONVERT_WINDOWS_PATHS=1
APPLICATION_PATH=//C/Users/my_user/Documents/Development/my_application
My docker-compose.yml file:
version: '2'
services:
  web:
    build: ../application-web/
    ports:
      - "80:80"
    tty: true
    # Add a volume to link php code on the host and inside the container
    volumes:
      - ${APPLICATION_PATH}:/usr/share/nginx/html/application
      - ${APPLICATION_PATH}/docker_files/docker-assistant:/usr/share/nginx/html/assistant
    # Add hostnames to allow devs to call special url to open sites
    extra_hosts:
      - "localhost:127.0.0.1"
      - "assistant.docker:127.0.0.1"
      - "application.dev:127.0.0.1"
    depends_on:
      - custom-php
    links:
      - custom-php:custom-php
  custom-php:
    build: ../application-php/
    ports:
      - "50:50"
    volumes:
      - ${APPLICATION_PATH}:/usr/share/nginx/html/application
      - ${APPLICATION_PATH}/docker_files/docker-assistant:/usr/share/nginx/html/assistant
Problem
When I run docker-compose up, everything goes well. Containers start.
But when I try to reach http://192.168.99.100 in my web browser, I get a 403 error.
My investigations show that there are no mounted volumes in the nginx and php containers:
docker exec -it compose_web_1 bash
ls -la /usr/share/nginx/html/assistant/
shows
drwxr-xr-x 2 root root   80 May 18 15:30 .
drwxr-xr-x 2 root root 4096 May 18 16:10 ..
It seems that Docker cannot mount volumes. Why?
Other information
I am using the Docker Toolbox: https://www.docker.com/products/docker-toolbox
I know it's the correct IP address because when I try to reach it in my web browser, I can see my nginx container logging the requests.
The environment variable APPLICATION_PATH set to //C:/Users/my_user/Documents/Development/my_application cannot work, because Docker uses the ":" character as the separator for volume declarations:
ERROR: Volume //C:/Users/my_user/Documents/Development/my_application://C:/Users/my_user/Documents/Development/my_application has incorrect format, should be external:internal[:mode]
It's not an nginx problem, because when I create an index.phtml file in the folder, I am able to run it:
<?php
echo 'Hello world!';
Ok, I finally did it!
TL;DR
Follow those instructions to be able to access C:\ inside your containers.
1. Install the Docker Toolbox
Go get it here: https://www.docker.com/products/docker-toolbox
Install it.
2. Run a Hello world
Open a Docker Quickstart Terminal.
Run in it:
docker run hello-world
3. Share C:\ with Docker
Open Virtualbox
Open configuration of the default virtual machine and go to shared folders
Modify or create a new shared folder by clicking on buttons to the right. Set options to:
C:\
C
Auto mount
Permanent configuration
Then validate.
4. Activate sharing
Shutdown the default virtual machine then restart it.
5. Set your paths
e.g., if you have a .env file:
COMPOSE_CONVERT_WINDOWS_PATHS=1
APPLICATION_PATH=//C/path_from_C_to_the_folder_you_want_to_share_on_the_volume
/!\ you need to set COMPOSE_CONVERT_WINDOWS_PATHS to 1!
6. Start your Compose
In the Docker Quickstart Terminal:
Go to your Docker Compose folder, then start it:
cd /path_to_your_compose_folder
docker-compose up
Why do I have to do that? It's so complicated!
Docker relies on Linux namespaces; without Linux, it can't work. To allow the use of Docker on Windows, Docker installs a Linux virtual machine, and all the containers run inside it.
The default virtual machine is created and runs within VirtualBox, which is why you have to share your folders through VirtualBox.
After sharing, the default virtual machine will have a mounted folder in it with a custom name (in the above example, it's C but it could be elephant or whatever).
Finally, Docker will mount volumes from the default virtual machine to the container: you have to use the name of the default machine shared folder in your volume declaration (in the above example, it's C but it could be elephant or whatever).
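If the mount still comes up empty, it can help to check what the default virtual machine actually sees. This is only a sanity check, and it assumes the Toolbox machine is named default and that the share was auto-mounted under the name you gave it (C in the example above):
# list the shared folder from inside the boot2docker VM
docker-machine ssh default "ls /C"
# quick test that a volume from that share mounts into a container
docker run --rm -v //C/Users:/data alpine ls /data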

How can I Publish a jupyter tmpnb server?

I'm trying to publish a tmpnb server, but am stuck. Following the Quickstart at http://github.com/jupyter/tmpnb, I can run the server locally and access it at 172.17.0.1:8000.
However, I can't access the server remotely. I've tried adding -p 8000:8000 when I create the proxy container with the following command:
docker run -it -p 8000:8000 --net=host -d -e CONFIGPROXY_AUTH_TOKEN=$TOKEN --name=proxy jupyter/configurable-http-proxy --default-target http://127.0.0.1:9999
I tried to access the server by typing the machine's IP address:8000 but my browser still returns "This site can't be reached."
The logs for proxy are:
docker logs --details 45d836f98450
08:33:20.981 - info: [ConfigProxy] Proxying http://*:8000 to http://127.0.0.1:9999
08:33:20.988 - info: [ConfigProxy] Proxy API at http://localhost:8001/api/routes
To verify that I can access other servers run on the same machine I tried the following command: docker run -d -it --rm -p 8888:8888 jupyter/minimal-notebook and was able to access it remotely at the machine's IP address:8888.
What am I missing?
I'm working on an Ubuntu 16.04 machine with Docker 17.03.0-ce
Thanks
Create a file named docker-compose.yml with the following content; you can then launch the containers with docker-compose up. Since the images are pulled directly, build errors are avoided.
httpproxy:
  image: jupyter/configurable-http-proxy
  environment:
    CONFIGPROXY_AUTH_TOKEN: 716238957362948752139417234
  container_name: tmpnb-proxy
  net: "host"
  command: --default-target http://127.0.0.1:9999
  ports:
    - 8000:8000

tmpnb_orchestrate:
  image: jupyter/tmpnb
  net: "host"
  container_name: tmpnb_orchestrate
  environment:
    CONFIGPROXY_AUTH_TOKEN: $TOKEN$
  volumes:
    - /var/run/docker.sock:/docker.sock
  command: python orchestrate.py --command='jupyter notebook --no-browser --port {port} --ip=0.0.0.0 --NotebookApp.base_url=/{base_path} --NotebookApp.port_retries=0 --NotebookApp.token="" --NotebookApp.disable_check_xsrf=True'
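To bring this up, a minimal sketch (assuming you export a token first, as the $TOKEN placeholder above expects; note that, as written, the proxy service hard-codes its own token, so the two values need to agree):
# generate a random shared secret and start both services
export TOKEN=$( head -c 30 /dev/urandom | xxd -p )
docker-compose up -d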
A solution is available from the github.com/jupyter/tmpnb README.md file. At the end of the file under the heading "Development" three commands are listed:
git clone https://github.com/jupyter/tmpnb.git
cd tmpnb
make dev
These commands clone the tmpnb repository, cd into it, and run the "dev" target from the makefile contained in the tmpnb repository. On my machine, entering those commands created a notebook on a temporary server that I could access remotely. Beware that the "make dev" command deletes potentially conflicting docker containers as part of the launching process.
Some insight into how this works can be gained by looking inside the makefile. When the configurable-http-proxy image is run on Docker, both ports 8000 and 8001 are published, and the tmpnb image is run with CONFIGPROXY_ENDPOINT=http://proxy:8001.
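As a rough illustration only — these are not the actual makefile contents, and the flags are assumptions based on the description above — the equivalent plain docker run commands might look like this:
# publish both the public port (8000) and the proxy API port (8001)
docker run -d --name=proxy \
  -p 8000:8000 -p 8001:8001 \
  -e CONFIGPROXY_AUTH_TOKEN=$TOKEN \
  jupyter/configurable-http-proxy \
  --default-target http://127.0.0.1:9999
# point tmpnb at the proxy API through the linked hostname "proxy"
docker run -d --name=tmpnb \
  --link proxy:proxy \
  -e CONFIGPROXY_AUTH_TOKEN=$TOKEN \
  -e CONFIGPROXY_ENDPOINT=http://proxy:8001 \
  -v /var/run/docker.sock:/docker.sock \
  jupyter/tmpnb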

How to access OrientDB bin scripts installed in Docker

I have installed OrientDB inside Docker. I want to run the scripts inside the bin folder, but I am not able to find any way to browse the OrientDB directory like a normal file explorer. Is there any way I can use the Docker installation like a local installation, so that I can see and interact with all the folders of the OrientDB installation? Thanks.
If you want to access them inside the Docker container, you can do this:
Start the container, then run docker exec -i -t CONTAINER_NAME bash or docker exec -i -t CONTAINER_NAME /bin/sh. If bash/sh is installed in this particular image, you will get a shell and can do what you want there.
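For example, assuming the container is named orientdb and the installation lives under /opt/orientdb (as the docker run command quoted further below suggests), you could open a shell or run a bin script directly:
# interactive shell inside the running container
docker exec -it orientdb bash
# run a script from the bin folder without opening a shell first
docker exec -it orientdb /opt/orientdb/bin/console.sh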
Another way, and I think it's what you want, is to use Docker volumes. You map a host path to a container path, so the container sees whatever changes you make outside.
Map some folder on your host system to the location OrientDB expects, and it will create its files there.
Here is a volume-mapping excerpt from a docker-compose.yml for MySQL:
alldbhost:
  ports:
    - "3306:3306"
  image: percona:5.5
  volumes:
    - ./etc/timezone:/etc/timezone
    - /dev/shm/mysql/:/var/lib/mysql
    - ./etc/mysql/:/etc/mysql
    - /home/user/temp/mysql_replication:/local/mysql/binlog
  environment:
    TERM: xterm
Actually, the OrientDB manual provides these instructions:
docker run --name orientdb -d -v <config_path>:/opt/orientdb/config -v <databases_path>:/opt/orientdb/databases -v <backup_path>:/opt/orientdb/backup -p 2424 -p 2480 nesrait/orientdb-2.0
Here <databases_path> is a path on your host system where the database files will be located.
If you installed OrientDB inside some other container (Ubuntu, for example), you should locate the OrientDB config files, find where it stores its databases, and, again, map your host directories to the container's.
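Filling in the placeholders from the manual's command, a hedged example (the host paths are purely illustrative; put them wherever you like) would be:
docker run --name orientdb -d \
  -v ~/orientdb/config:/opt/orientdb/config \
  -v ~/orientdb/databases:/opt/orientdb/databases \
  -v ~/orientdb/backup:/opt/orientdb/backup \
  -p 2424:2424 -p 2480:2480 \
  nesrait/orientdb-2.0
After that, ~/orientdb/databases can be browsed on the host like any other directory.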

How to link multiple Docker containers and encapsulate the result?

I have a Node.js web-application that connects to a Neo4j database. I would like to encapsulate these in a single Docker image (using also a Neo4j Docker container), but I'm a docker novice and can't seem to figure this out. What's the recommended way to do it in the latest Docker versions?
My intuition would be to run the Neo4j container nested inside the app container. But from what I've read, I think the supported / recommended approach is to link the containers together. What I need is pretty well illustrated in this image. But the article where the image comes from isn't clear to me. Anyway, it's using the soon-to-be-deprecated legacy container linking, while networking is recommended these days. A tutorial or explanation would be much appreciated.
Also, how does docker-compose fit into all this?
Running a container within another container would mean running a Docker engine within a Docker container. This is referred to as dind, for Docker-in-Docker, and I would strongly advise against it. You can search for 'dind' online and discover why it is a bad idea in most cases, but as it is not the main object of your question I won't expand on this subject any further.
Running both a node.js process and a neo4j process in the same container
While most people will tell you to refrain from running more than one process within a Docker container, nothing prevents you from doing so. If you want to follow this path, take a look at the Using Supervisor with Docker article from the Docker documentation website, or at the Phusion baseimage Docker image.
Just be aware that this way of doing things will make your Docker image more and more difficult to maintain over time.
Linking containers
As you found out, keeping Docker images as simple as you can (i.e., running one and only one app within a Docker container) will make your life easier in the long term.
Linking containers together is trivial when both containers run on the same Docker engine. It is just a matter of (see the sketch after this list):
having your neo4j container expose the port its service listens on
running your node.js container with the --link <neo4j container name>:<alias> option
within the node.js application configuration, setting the neo4j host to the <alias> hostname; Docker will take care of forwarding that connection to the IP it assigned to the neo4j container
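A minimal sketch of those three steps (the node.js image name is a placeholder, and the alias is arbitrary):
# the official neo4j image already exposes the port its HTTP service listens on (7474)
docker run --detach --name myneo4j neo4j
# link it into the node.js container under the alias "db"
docker run --detach --name mynodejs --link myneo4j:db <your nodejs image>
# inside the node.js app configuration, use http://db:7474 as the neo4j host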
When you want to run those two containers on different hosts, things get more difficult.
With Docker Compose, you have to use the links: key to define your links.
The new Docker network feature
You also discovered that linking containers won't be supported in the future and that the new way of making multiple Docker containers communicate is to create a virtual network and attach those 2 containers to that network.
Here's how to proceed:
docker network create mynet
docker run --detach --name myneo4j --net mynet neo4j
docker run --detach --name mynodejs --net mynet <your nodejs image>
Your node application configuration should then use myneo4j as the host to connect to.
To tell Docker Compose to use the new network feature, you would have to use the --x-networking option. Also you would not use the links: key.
Using the new networking feature also means that you won't be able to define any alias for the db. As a result you have to use the container name. Beware that unless you use the container_name: key in your docker-compose.yml file, Compose will create container names based on the directory which contains your docker-compose.yml file, the service name as found in the yml file and a number.
For instance, the following docker-compose.yml file, if within a directory named "foo" would create two containers named foo_web_1 and foo_db_1:
web:
  build: .
  ports:
    - "8000:8000"
db:
  image: postgres
when started with docker-compose --x-networking up, the web app configuration should then use foo_db_1 as the db hostname.
While if you use container_name:
web:
  build: .
  ports:
    - "8000:8000"
db:
  image: postgres
  container_name: mydb
when started with docker-compose --x-networking up, the web app configuration should then use mydb as the db hostname.
Example of using Docker Compose to run a web app using nodeJS and neo4j
In this example, I will show how to dockerize the example app from github project aseemk/node-neo4j-template which uses nodejs and neo4j.
I assume you already have Docker 1.9.0+ and Docker Compose 1.5+ installed.
This project will use 2 docker containers, one to run the neo4j database and one to run the nodeJS web app.
Dockerizing the web app
We need to build a Docker image from which Docker compose will run a container. For that, we will write a Dockerfile.
Create a file named Dockerfile (mind the capital D) with the following content:
FROM node
RUN git clone https://github.com/aseemk/node-neo4j-template.git
WORKDIR /node-neo4j-template
RUN npm install
# ugly 20s sleep to wait for neo4j to initialize
CMD sleep 20s && node app.js
This Dockerfile describes the steps the Docker engine will have to follow to build a docker image for our web app. This docker image will:
be based on the official node docker image
clone the nodeJS example project from Github
change the working directory to the directory containing the git clone
run the npm install command to download and install the nodeJS app dependencies
instruct docker which command to use when running a container of that image
A quick review of the nodeJS code reveals that the author allows us to configure the URL to use to connect to the neo4j database using the NEO4J_URL environment variable.
Dockerizing the neo4j database
Well, people took care of that for us already. We will use the official Docker image for neo4j, which can be found on the Docker Hub.
A quick review of the readme tells us to use the NEO4J_AUTH environment variable to change the neo4j password. Setting this variable to none will disable authentication altogether.
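For instance, to run the official image standalone with authentication disabled (a one-line sketch; the Compose file below does the same thing):
docker run --detach --name my-neo4j-db -e NEO4J_AUTH=none neo4j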
Setting up Docker Compose
In the same directory as the one containing our Dockerfile, create a docker-compose.yml file with the following content:
db:
  container_name: my-neo4j-db
  image: neo4j
  environment:
    NEO4J_AUTH: none
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db:7474
  ports:
    - 80:3000
This Compose configuration file describes 2 services: db and web.
The db service will produce a container named my-neo4j-db from the official neo4j docker image and will start that container setting up the NEO4J_AUTH environment variable to none.
The web service will produce a container named at Docker Compose's discretion, using a docker image built from the Dockerfile found in the current directory (build: .). It will start that container setting the environment variable NEO4J_URL to http://my-neo4j-db:7474 (note how we use here the name of the neo4j container, my-neo4j-db). Furthermore, Docker Compose will instruct the Docker engine to expose the web container's port 3000 on the docker host port 80.
Firing it up
Make sure you are in the directory that contains the docker-compose.yml file and type: docker-compose --x-networking up.
Docker compose will read the docker-compose.yml file, figure out it has to first build a docker image for the web service, then create and start both containers and finally will provide you with the logs from both containers.
Once the log shows web_1 | Express server listening at: http://localhost:3000/, everything is ready and you can point your web browser to http://<ip of the docker host>/.
To stop the application, hit Ctrl+C.
If you want to start the app in the background, use docker-compose --x-networking up -d instead. Then in order to display the logs, run docker-compose logs.
To stop the service: docker-compose stop
To delete the containers: docker-compose rm
Making neo4j storage persistent
The official neo4j docker image readme says the container persists its data on a volume at /data. We then need to instruct Docker Compose to mount that volume to a directory on the docker host.
Change the docker-compose.yml file with the following content:
db:
  container_name: my-neo4j-db
  image: neo4j
  environment:
    NEO4J_AUTH: none
  volumes:
    - ./neo4j-data:/data
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db:7474
  ports:
    - 80:3000
With that config file, when you run docker-compose --x-networking up, Docker Compose will create a neo4j-data directory and mount it into the container at /data.
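A quick way to check that the data really lands on the host (a sketch; the directory name is the one from the Compose file above):
docker-compose --x-networking up -d
ls ./neo4j-data          # should now contain neo4j's store files
docker-compose stop && docker-compose rm -f
ls ./neo4j-data          # the data survives removal of the containers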
Starting a 2nd instance of the application
Create a new directory and copy over the Dockerfile and docker-compose.yml files.
We then need to edit the docker-compose.yml file to avoid name conflict for the neo4j container and the port conflict on the docker host.
Change its content to:
db:
  container_name: my-neo4j-db2
  image: neo4j
  environment:
    NEO4J_AUTH: none
  volumes:
    - ./neo4j-data:/data
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db2:7474
  ports:
    - 81:3000
Now it is ready for the docker-compose --x-networking up command. Note that you must be in the directory with that new docker-compose.yml file to start the 2nd instance up.

Development workflow for server and client using Docker Compose?

I'm developing a server and its client simultaneously and I'm designing them in Docker containers. I'm using Docker Compose to link them up and it works just fine for production but I can't figure out how to make it work with a development workflow in which I've got a shell running for each one.
My docker-compose-devel.yml:
server:
  image: node:0.10
client:
  image: node:0.10
  links:
    - server
I can do docker-compose up client or even docker-compose run client but what I want is a shell running for both server and client so I can make rapid changes to both as I develop iteratively.
I want to be able to do docker-compose run server bash in one window and docker-compose run --no-deps client bash in another window. The problem with this is that no address for the server is added to /etc/hosts on the client because I'm using docker-compose run instead of up.
The only solution I can figure out is to use docker run and give up on Docker Compose for development. Is there a better way?
Here's a solution I came up with that's hackish; please let me know if you can do better.
docker-compose-devel.yml:
server:
  image: node:0.10
  command: sleep infinity
client:
  image: node:0.10
  links:
    - server
In window 1:
docker-compose --file docker-compose-devel.yml up -d server
docker exec --interactive --tty $(docker-compose --file docker-compose-devel.yml ps -q server) bash
In window 2:
docker-compose --file docker-compose-devel.yml run client bash
I guess your main problem is about restarting the application when there are changes in the code.
Personally, I launch my applications in development containers using forever:
forever -w -o log/out.log -e log/err.log app.js
The -w option restarts the server when there is a change in the code.
I use a .foreverignore file to exclude the changes on some files:
**/.tmp/**
**/views/**
**/assets/**
**/log/**
If needed, I can also launch a shell in a running container:
docker exec -it my-container-name bash
This way, your two applications can restart independently without you having to relaunch the commands yourself, and you still have the option to open a shell to do whatever you want.
Edit: here is a new proposal, considering that you need two interactive shells and not simply the ability to relaunch the apps on code changes.
Having two distinct applications, you could have a docker-compose configuration for each one.
The docker-compose.yml from the "server" app could contain this kind of information (I added different kind of configurations for the example):
server:
  image: node:0.10
  links:
    - db
  ports:
    - "8080:80"
  volumes:
    - ./src:/src
db:
  image: postgres
  environment:
    POSTGRES_USER: dev
    POSTGRES_PASSWORD: dev
The docker-compose.yml from the "client" app could use external_links to be able to connect to the server.
client:
  image: node:0.10
  external_links:
    - project_server_1:server # Use "docker ps" to know the name of the server's container
  ports:
    - "80:80"
  volumes:
    - ./src:/src
Then, use docker-compose run --service-ports service-name bash to launch each configuration with an interactive shell.
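For example, with the service names used above, from each app's own directory:
# window 1, in the server app's directory
docker-compose run --service-ports server bash
# window 2, in the client app's directory
docker-compose run --service-ports client bash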
Alternatively, the extra_hosts key may also do the trick by calling the server app through a port exposed on the host machine.
With this solution, each docker-compose.yml file could be committed in the repository of the related app.
First thing to mention: for a development environment you want to use volumes in docker-compose to mount your app into the container when it's started (at runtime). Sorry if you're already doing this and I'm stating the obvious, but it's not clear from your docker-compose.yml.
To answer your specific question: start your containers normally, then run docker-compose ps and you'll see the names of your containers, for example 'web_server' and 'web_client' (where web is the directory containing your docker-compose.yml file, or the name of the project).
When you got name of the container you want to connect to, you can run this command to run bash exactly in the container that's running your server:
docker exec -it web_server bash.
If you want to learn more about setting up a development environment for a reasonably complex app, check out this article on development with docker-compose.
