Connecting IBM Containers (Docker containers) to a Watson IoT service instance

I wonder if I can connect my Docker containers running in the IBM Containers service to a Watson IoT service instance (running, of course, in the same organization and space).
I could always assign a public IP to my container and connect through that, but it seems unnecessary, and I suspect there is an alternative like the one I use with other services, passing something like
-e "CCS_BIND_SRV=My-IoT-Service"
when starting the container.

Basically, you can connect directly to IBM Watson IoT from your Docker container. All you need are a couple of credentials. You can obtain them by reading the VCAP_SERVICES JSON property, which can be injected into your container:
Here is a link explaining this (search for VCAP_SERVICES).
Alternatively, you can obtain the credentials from the Bluemix UI and use them directly.
Here is a Python example showing how to do this.
Finally, I can recommend this course, since it explains connectivity in detail.
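As a minimal sketch of reading those injected credentials in Python (the service key "iotf-service" and the field names "org"/"apiKey"/"apiToken" are assumptions here; inspect the exact JSON your own container receives):

```python
import json
import os

def iot_credentials(env=os.environ, service_key="iotf-service"):
    """Pull Watson IoT credentials out of the injected VCAP_SERVICES JSON.

    NOTE: the service key and the credential field names are assumptions;
    check your own VCAP_SERVICES content to confirm them.
    """
    vcap = json.loads(env.get("VCAP_SERVICES", "{}"))
    instances = vcap.get(service_key)
    if not instances:
        raise KeyError(f"no '{service_key}' entry found in VCAP_SERVICES")
    # A space may have several bound instances; take the first one here.
    return instances[0]["credentials"]
```

The returned dictionary would then carry whatever the service exposes (organization ID, API key, and so on) for your MQTT or HTTP client.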

Related

Multi-Platform Docker Internal Network Connect From Host

I have a unique Docker issue. I am developing an application which needs to connect to multiple Docker containers. The gist is, that this application will use the Docker SDK to spin up containers and connect to them as needed.
However, due to the nature of the application, we should assume that each one of these containers is compromised and unsafe. Therefore, I need to separate them from the host network (so they cannot access my devices and the WAN). I still have the constraint of needing to connect to them from my application.
It is a well-known problem that the macOS networking stack doesn't support connecting to a Docker network directly. Normally, I'd get around this by exposing the port I need. However, that is not possible for my application, as I am using internal networks with Docker.
I'd like to accomplish something like the following. Imagine Container 2 and Container 3 are on their own private internal network. The host (which isn't a container) controls the Docker SDK and can query their internal IPs, so it can easily connect to these machines without that network being exposed to the host's network. This sort of setup works on Linux; however, I'd like to come up with a cross-platform solution that also works on macOS.
I had a similar situation. What I ended up doing was:
The app manages a dynamic container-to-port mapping (just a hash table).
When my app (on the host) wants to launch a container, it finds an unused port in a pre-defined range (e.g. 28000-29000).
Once it has a port, it publishes the container's port on that host port (e.g. -p 28003:80).
When my app needs to refer to a container, it uses localhost:<port> (e.g. localhost:28001).
It turns out to not be a lot of code, but if you go that route, make sure you encapsulate the way you refer to containers (i.e. don't hard-code the hostname and port, use a class that generates the string).
All that said, you should really do some testing with a VM deployment option before you rule it out as too slow.
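The bookkeeping part of the answer above can be sketched as follows (the Docker SDK calls are left out; this only shows the hash-table port allocation and the encapsulated address string):

```python
class PortRegistry:
    """Tracks which host port each container is published on.

    The 28000-29000 range mirrors the example above; pick any free range.
    """

    def __init__(self, low=28000, high=29000):
        self.low, self.high = low, high
        self.by_container = {}  # container name -> host port

    def allocate(self, name):
        """Reserve the lowest unused port in the range for `name`."""
        used = set(self.by_container.values())
        for port in range(self.low, self.high):
            if port not in used:
                self.by_container[name] = port
                return port
        raise RuntimeError("no free port left in range")

    def url(self, name, scheme="http"):
        # Encapsulate the host:port string so callers never hard-code it.
        return f"{scheme}://localhost:{self.by_container[name]}"
```

The allocated port would then be handed to `docker run -p <port>:80 ...` (or the SDK equivalent) when the container is launched, and all other code refers to the container only through `url()`.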

Force docker container to use host machine MAC address

I am providing a Docker container for my software that runs directly on the user's machine. The software is supposed to use a node-locked license bound to the MAC address of the host machine. FlexLM is used to validate the license.
The problem is that the Docker container does not, by default, have access to the host machine's MAC address. One has to either attach the container to the host network using the --net argument or provide a MAC address explicitly using the --mac-address argument.
The trouble is that one can pass any value to --mac-address and the container will use that MAC address. This defeats the whole purpose of a node-locked license. How do I make sure that the container always gets the host machine's MAC address?
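To see why the check is so easy to defeat: any MAC lookup made inside the container only sees the container's own interface, i.e. whatever --mac-address set. A small Python illustration of such a lookup (not FlexLM's actual mechanism):

```python
import uuid

def mac_address():
    """Return the MAC address visible in the current network namespace.

    Inside a container this reports the *container's* interface, i.e.
    whatever was set via --mac-address, not the host's NIC, which is
    exactly why MAC-based node locking breaks in containers.
    """
    node = uuid.getnode()  # 48-bit integer, most significant byte first
    return ":".join(f"{(node >> shift) & 0xff:02x}" for shift in range(40, -1, -8))
```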
Short answer: "there is currently no good solution for node-locking within a container. Everything is virtualized, so there is nothing safe to bind to."
Suggestion: have you heard of Flexera's REST-based licensing API, also known as the Cloud Monetization API or CMAPI?
This API was designed for cloud-to-cloud license checking. It does not require the SDK libraries; you can call it from any language that can make a REST call. It makes for a very lightweight client, but requires back-end functionality (FlexNet Operations and the Cloud Licensing Service) to support it.
It's a great solution for applications deployed in a Docker container.
Take a look at the FlexNet Licensing datasheet here:
https://www.flexerasoftware.com/resources.html?type=datasheet
Then contact your account manager for more information.
Source - Flexera Customer Community - https://community.flexera.com/t5/FlexNet-Publisher-Forum/Support-for-Docker-and-Kubernetes/m-p/111022

How to containerize database dependent services?

Example: I have a microservice 'Alpha', which usually connects to 'http://localhost:3306/dbforalpha'. The service depends on that database. Now I want to containerize both the database and the service. Of course, the address of the database changes, so I cannot even build an image for service 'Alpha'.
Now I am wondering how to deal with that problem. There must be an easier way than waiting until the database container is running to check its ip:port. Do tools like Kubernetes solve this issue?
Docker comes with a service discovery mechanism (that's the general term for how services learn how to talk to each other): containers can be linked together, and you can use DNS names to talk to them.
For example, your Alpha service could be linked to your database and connect to db:3306, and Docker would set the necessary /etc/hosts entries in Alpha so it can resolve db to an IP.
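Concretely, this means service 'Alpha' should stop hard-coding localhost:3306 and instead resolve the database by its link/DNS name. A sketch, assuming the linked name is db (the DB_HOST/DB_PORT variable names are just a convention chosen here, not anything Docker defines):

```python
import os

def database_url(env=os.environ):
    """Build the DB address from the environment at runtime.

    Defaults to the Docker DNS name 'db' rather than localhost, so the
    same image runs unchanged wherever the database container lives.
    """
    host = env.get("DB_HOST", "db")
    port = env.get("DB_PORT", "3306")
    return f"mysql://{host}:{port}/dbforalpha"
```

With this, the same image works in development (set DB_HOST=localhost) and in a linked-container deployment (no variables needed at all).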

Using RabbitMQ for communication between different Docker containers

I want to communicate between two apps stored in different Docker containers, both part of the same Docker network. I'll be using a message queue (RabbitMQ) for this.
Should I make a third Docker container that runs as my RabbitMQ server, and then just make a channel on it for those two specific containers? That way I can make more channels later if, for example, a third app needs to communicate with the other two.
Regards!
Yes, that is the best way to use containers: it will allow you to scale, and you can use the official RabbitMQ image and concentrate on your application.
Since you have started using containers, that's the right way to go. But if your app is deployed in a cloud (AWS, Azure, and so on), it's better to use the cloud's queue service, which is already configured, is updated automatically, has monitoring, and so on.
I'd also like to point out that Docker containers are only a way to deploy your application's components. The application shouldn't care how its components (services, databases, queues, and so on) are deployed. To an app service, a message queue is simply a service located somewhere, accessible via connection parameters.
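As a sketch of what those "connection parameters" look like in practice, assuming the broker container is reachable on the shared Docker network under the service name rabbitmq and using the pika client (the queue name and default guest credentials are placeholders):

```python
def amqp_url(host="rabbitmq", user="guest", password="guest", port=5672):
    # 'rabbitmq' assumes the broker container shares a Docker network with
    # the apps under that name; %2F is the URL-encoded default vhost "/".
    return f"amqp://{user}:{password}@{host}:{port}/%2F"

def publish(queue, body, host="rabbitmq"):
    """Publish one message to the shared broker (requires `pip install pika`)."""
    import pika  # imported lazily so amqp_url() works without the client installed
    connection = pika.BlockingConnection(pika.URLParameters(amqp_url(host)))
    try:
        channel = connection.channel()
        channel.queue_declare(queue=queue, durable=True)
        channel.basic_publish(exchange="", routing_key=queue, body=body)
    finally:
        connection.close()
```

Each app only needs the broker's hostname, so swapping the container for a managed cloud queue later is just a change of connection parameters.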

Not able to access a Docker container through its bound public IP

I am trying to use Docker containers on Bluemix, but it looks like I am having trouble. I tried again this morning, but it still does not work.
I have followed these steps:
I released all public IPs by issuing the cf ic ip release command.
I created a new container from the etherpad image (following the tutorial), requesting and binding a new public IP from the Bluemix GUI.
Bluemix assigned the IP 134.168.1.49 and bound it to the container.
I expect the application to respond at http://134.168.1.49:9080/, but it hangs and eventually responds with a connection timeout.
Running a container from the same image locally works perfectly.
Any ideas or suggestions?
There is a known issue with the IBM Containers service where there is a delay before inbound network access becomes available after a container starts. It can take up to five minutes.
Are you able to successfully ping the bound IP address?
Note: the IBM Containers service suffered a major incident yesterday which affected operations. If you were trying to use it during that time, your problem may be related.
We recently experienced some connectivity issues in our US-South datacenter. I would suggest redeploying your container with an IP address again today and seeing whether you have more success.
I worked with Bluemix support, who were able to create a new image, start it up, and access it successfully with my exact configuration. At this time, it appears there is something wrong with the networking for the tenant space where my containers are running. The Bluemix team is investigating.
Thank you all for the support.
