I have my Tyk components (the Tyk Pro Demo from GitHub) running with Docker Compose. When I create a simple API pointing at a public upstream, such as the Swagger Petstore, it works fine.
Now I also have a WordPress application running with Docker Compose; the Compose file is the one from the Docker documentation (https://docs.docker.com/samples/wordpress/). This application is running on http://localhost:8000.
However, when I put 'http://localhost:8000' as the target URL in the Tyk API definition and call the API through Tyk, it shows 'There was a problem proxying the request'.
Is there any setting or method that can solve this problem?
Actually, when I referred to the Docker documentation, I realized that I just need to put the two stacks on the same Docker network: 'localhost' inside the Tyk container refers to the container itself, not to the host where WordPress is published.
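For anyone hitting the same issue, here is a minimal sketch of that fix using plain docker commands; the container names wordpress and tyk-gateway are assumptions, so check docker ps for the real ones:

# Create a shared network and attach both containers to it.
docker network create tyk-wordpress
docker network connect tyk-wordpress wordpress
docker network connect tyk-wordpress tyk-gateway

# Then change the API definition's target URL from http://localhost:8000
# to http://wordpress:80. On a shared network, Docker's embedded DNS resolves
# the container name, and the gateway reaches the container's internal port
# (80), not the port published on the host (8000).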
I'm trying to pull an image in an environment with multiple proxies; which proxy is correct depends on the zone the machine is pulling from.
For the record, adding the single relevant proxy in /etc/systemd/system/docker.service.d/http-proxy.conf on the machine that is pulling the image works fine.
But the image needs to be pulled in multiple zones, which require different proxies depending on where the machine is.
I tried two things:
First, I passed a list of proxies in http-proxy.conf, like this:
[Service]
Environment="HTTP_PROXY=http://proxy_1:port/,http://proxy_2:port/"
Environment="HTTPS_PROXY=http://proxy_1:port/,http://proxy_2:port/"
Environment="NO_PROXY=localhost"
Some machines require http://proxy_1:port/, which works fine.
But on a machine that requires http://proxy_2:port/ to pull, it does not work; Docker does not fall back to the other proxy in the list. It returns this error:
Error response from daemon: Get HTTP:<ip>:<proxy_1> proxyconnect tcp: dial tcp <ip>:<proxy_1>: connect: no route to host
Of course, if I provide only the second (working) proxy in the configuration, it works.
Second, I tried passing the proxy as a parameter to docker pull, as with docker build/run, but that is not supported, per the documentation.
I am looking for a way to set up proxies so that either
Docker falls back to trying the other provided proxies,
OR
I can provide the proxy dynamically at pull time (this will be part of an automated process that determines the relevant proxy to pass).
I do not want to constantly change the http-proxy.conf file and restart Docker, for obvious reasons.
What are my options?
If you're using a sufficiently recent Docker (i.e. 17.07 or higher), you can put this configuration on the client side. Refer to the official documentation for the configuration details.
You still need multiple configuration files for the various proxy configurations, but you can switch between them without restarting the Docker daemon.
To do something similar (not exactly proxy-related), I use a shell script that wraps the docker client invocation and points it at a custom configuration file via the --config option.
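A rough sketch of that setup, assuming Docker 17.07+; the directory names and proxy addresses below are placeholders, and the shape of the proxies section follows the client configuration file documentation:

# One config directory per proxy, each with its own config.json, e.g.
# ~/.docker-proxy1/config.json containing:
# {
#   "proxies": {
#     "default": {
#       "httpProxy":  "http://proxy_1:port/",
#       "httpsProxy": "http://proxy_1:port/",
#       "noProxy":    "localhost"
#     }
#   }
# }

# Pick the matching directory per invocation; no daemon restart needed:
docker --config ~/.docker-proxy1 pull alpine:latest
docker --config ~/.docker-proxy2 pull alpine:latest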
I created a very simple Docker practice script (GitHub link) and ran it via the Docker application on my macOS computer without any problems. I wanted to test it on Google Cloud's Compute Engine, so I created an instance and rebuilt the Docker image and container via the SSH browser (using Debian GNU/Linux).
Everything seems to work fine, except when I try to access the container via localhost or the external IP. Both give me a 'Site can't be reached' response.
I've adjusted the firewall settings many times and ended up with the same results as in the screenshot provided. I eventually reset the firewall settings to their defaults just so I could bring this question here. Here are the default settings.
What makes me think I'm missing something is that I can run curl http://localhost:5000 (the port I've chosen to expose) and get the expected response, which is all I had set the page to say once launched.
What am I missing that prevents me from viewing the container via localhost or the external IP?
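In case a concrete sketch helps others debug the same symptom, these are the two conditions that both have to hold for the external IP to work; the image name and the firewall rule name below are made up, and port 5000 is taken from the question:

# 1. Publish the container port on all host interfaces (docker run -p does
#    this by default, i.e. it binds to 0.0.0.0):
docker run -d -p 5000:5000 my-practice-image

# 2. Allow ingress TCP traffic on that port in the GCP firewall:
gcloud compute firewall-rules create allow-port-5000 \
    --direction=INGRESS --action=ALLOW --rules=tcp:5000 \
    --source-ranges=0.0.0.0/0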
I have an app that creates Docker containers using the Docker remote API, via this library.
So far it is working fine with simple configuration options for container creation. Now I need to create containers with many more config options, so I'm wondering if I can use a docker-compose file. The library is based on v1.23 of the Docker remote API spec; does the remote API support creating a container from a Compose file?
I cannot find such an option in this documentation, but I wonder if I am looking in the wrong place.
No; Docker Compose itself is an application that uses the API. You'd need to run docker-compose up or something similar as a shell command if you wanted to use it directly.
(You might be able to hook into its internals from a Python program, since Compose is written in Python, but not from Java.)
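A minimal sketch of the shell-out approach described above, assuming the Compose file lives at /path/to/docker-compose.yml:

# Run Compose as an external command instead of going through the remote API:
docker-compose -f /path/to/docker-compose.yml up -d

# From Java this could be launched with something like:
# new ProcessBuilder("docker-compose", "-f", composeFile, "up", "-d")
#         .inheritIO().start().waitFor();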
I'm trying to build a simple in-browser shell using Docker and xterm.js. I've correctly hooked up the frontend using xterm.js's attach addon.
How does one connect to Docker via websockets?
If you are using Docker API version 1.28 or higher, you cannot connect xterm.js straight to it, because Docker changed its WebSocket protocol from text to binary 😞.
There is an open xterm.js issue for that: https://github.com/xtermjs/xterm.js/issues/883.
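One workaround implied by the version cutoff above: the Engine API accepts a version pinned in the URL path, so requesting v1.27 (the last text-frame version, going by the linked issue) should keep the old behaviour. A sketch, assuming the daemon listens on tcp://localhost:2375 (unauthenticated, local experiments only):

# Attach endpoint with an explicitly pinned API version:
# ws://localhost:2375/v1.27/containers/<container-id>/attach/ws?stream=1&stdin=1&stdout=1&stderr=1
# <container-id> comes from `docker ps`; the query flags select which
# standard streams are wired into the WebSocket.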
Is there a way to add API endpoints in Kong without using curl? I have Kong up and running in a Docker container via docker-compose, and I would like to pass in a configuration file (or what-have-you) on container spin-up that defines the endpoints I want set up. Is this possible? This is the closest I have found to a solution: http://blog.toast38coza.me/kong-up-and-running-part-2-defining-our-api-gateway-with-ansible/
One option is the YAML-driven Kongfig tool to manage the Kong configuration. You could run it externally to the container, e.g. via a CI process (Jenkins etc.), or in theory add a bootstrap step that runs Kongfig locally within the container.
You can use Kongfig, as Mark said, or go through the Konga GUI.
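A rough sketch of the Kongfig route: describe the endpoints in a YAML file and apply it against Kong's admin API on spin-up. The config shape and flags below follow the Kongfig README as I remember it, so treat them as assumptions to verify:

# kong-config.yml, a declarative description of the API to register:
# apis:
#   - name: example-api
#     attributes:
#       upstream_url: "http://example-upstream:8080"
#       uris: "/example"

# Apply it against the admin port (8001 by default):
kongfig apply --path kong-config.yml --host localhost:8001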