I found out there is a hyperledger/composer-playground Docker image. It can easily be started with
docker run --name composer-playground --publish 8080:8080 --detach hyperledger/composer-playground
Now I want to make a Dockerfile out of it that can serve an existing Business Network Definition as a demo application. It should be embedded, so that no real Fabric network is required. What possibilities do I have to accomplish that?
First idea: card file structures could be copied into /home/composer/.composer/cards, but as far as I understand, these cards would have to use the embedded connection type; otherwise a real Fabric network is required.
Second idea: Is there some API endpoint that could be queried to create an embedded network for a .bna file?
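To illustrate the first idea, here is a minimal Dockerfile sketch. It assumes that pre-created cards (using the embedded connection type) sit in a local ./cards directory and that the playground picks them up from that path — which is exactly the part I'm unsure about:

```dockerfile
# Sketch only: assumes pre-created cards with the embedded connection type
FROM hyperledger/composer-playground
COPY --chown=composer:composer cards/ /home/composer/.composer/cards/
EXPOSE 8080
```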
Interesting idea, and with the direction of Composer Playground cropping up a bit recently, it would be a good one to discuss on a Composer community call.
As for how things are now, I think you'd have to set everything up with a real Fabric. I haven't seen a Dockerfile that does that, but it seems doable. The hosted playground does everything in local storage and PouchDB (IndexedDB), so I don't think you would be able to get a demo .bna in there without changes to the playground.
One thing that I had pondered in the past was making it possible to configure where the playground looks for sample networks, and that could even include the primary 'get started' network.
Might that help in this case? It could be worth opening a GitHub issue to explore the use cases if that sounds useful (pull requests gratefully accepted!)
I'm trying to learn Dapr and Docker Compose at the same time, though I am running into some problems. I have a very basic docker-compose.yaml, shown below:
version: "3.7"
services:
  python-service:
    image: python-image
  java-service:
    image: java-image
My goal is to make these be able to communicate over Dapr (currently they are simple hello world programs, but I'm trying to get the connection working first.)
My goal architecture would be something like:
[python-service][Dapr-sidecar]
[java-service][Dapr-sidecar]
Having the services talk to their sidecars, and the sidecars talk to each other over a network. I'm quite stumped on how to achieve this, and I can't seem to find any guides online that fit my exact case.
I tried to follow the guide here: https://stackoverflow.com/a/66402611/17494949. However, it gave me a bunch of errors, seemed to use some other system, and didn't explain how to run it. Any help would be appreciated.
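For reference, my current understanding of the sidecar layout in Compose is something like the following — the daprd image name, the --app-port values, and the app-ids are assumptions on my part, and the network_mode trick is meant to make each sidecar share its app's network namespace:

```yaml
# Sketch only: daprd flags and app ports are assumptions
version: "3.7"
services:
  python-service:
    image: python-image
  python-dapr:
    image: daprio/daprd:latest
    command: ["./daprd", "--app-id", "python-service", "--app-port", "5000"]
    network_mode: "service:python-service"  # share the app's network namespace
    depends_on:
      - python-service
  java-service:
    image: java-image
  java-dapr:
    image: daprio/daprd:latest
    command: ["./daprd", "--app-id", "java-service", "--app-port", "8080"]
    network_mode: "service:java-service"
    depends_on:
      - java-service
```

But I don't know whether this is enough for the sidecars to discover each other, or whether something like a placement service is also needed.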
I have a web app (.NET Core) running in a Docker container. If I update it under load, it won't be able to handle requests until there is a gap in traffic. This might be a bug in my app or in .NET; I am looking for a workaround for now. If I hit the app with a single HTTP request before exposing it to the traffic, though, it works as expected.
I would like to get this behaviour:
In the running server get the latest release of the container.
Launch the container detached from the network.
Run a health check on it, if health check fails - stop.
Remove old container.
Attach new container and start processing traffic.
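To make the flow concrete, here is a rough sketch of those steps as a shell script. All names, ports, and the /health endpoint are placeholders, and it assumes a reverse proxy (e.g. nginx) in front that can be repointed at the new container:

```shell
#!/bin/sh
# Sketch only: image name, ports, and health endpoint are placeholders.
set -e

IMAGE=registry.example.com/myapp:latest
NEW=myapp-new
OLD=myapp-old

# 1. Get the latest release of the container image
docker pull "$IMAGE"

# 2. Launch the new container on a port the proxy is not yet routing to
docker run -d --name "$NEW" -p 8081:80 "$IMAGE"

# 3. Health check; this first request also "warms up" the app
healthy=""
for attempt in 1 2 3 4 5; do
  if curl -fsS http://localhost:8081/health >/dev/null; then
    healthy=yes
    break
  fi
  sleep 2
done
if [ -z "$healthy" ]; then
  docker rm -f "$NEW"
  echo "health check failed; keeping old container" >&2
  exit 1
fi

# 4. Remove the old container
docker rm -f "$OLD" || true

# 5. Repoint the proxy at port 8081 (e.g. rewrite the nginx upstream and reload)
```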
I am using Compose at the moment and have somewhat limited knowledge of Docker infrastructure. The problem should be something well understood, yet I've failed to find anything on Google on the topic.
It kind of sounds like Kubernetes at this stage, but I would like to keep it as simple as possible.
The thing I was looking for is Blue/Green deployment, and it is quite easy to search for.
E.g.
https://github.com/Sinkler/docker-nginx-blue-green
https://coderbook.com/#marcus/how-to-do-zero-downtime-deployments-of-docker-containers/
Swarm has a feature which could be useful as well: https://docs.docker.com/engine/reference/commandline/service_update/
Google Cloud Run is new. Is it possible to run a WordPress Docker container on it? Perhaps using GCE to host the MySQL/MariaDB database. I can't find any discussion on this.
Although I think this is possible, it's not a good use of your time to go through this exercise. Cloud Run might not be the right tool for the job.
UPDATE someone blogged a tutorial about this (use at your own risk): https://medium.com/acadevmy/how-to-install-a-wordpress-site-on-google-cloud-run-828bdc0d0e96
Here are a few points to consider;
(UPDATE: this is not true anymore) Currently Cloud Run doesn't support natively connecting to Cloud SQL (mysql). There's been some hacks like spinning up a cloudsql_proxy inside the container: How to securely connect to Cloud SQL from Cloud Run? which could work OK.
You need to prepare your wp-config.php beforehand and bake it into your container image. Since your container will be wiped every now and then, you should install your blog (which creates a wp-config.php) and bake the resulting file into the container image, so that when the container restarts it doesn't lose your wp-config.php.
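The baking step itself is short. For example (the tag is illustrative; the docroot is the one used by the official wordpress image):

```dockerfile
FROM wordpress:latest
# wp-config.php generated by a local install, copied into the image
COPY wp-config.php /var/www/html/wp-config.php
```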
Persistent storage might be a problem: similar to point #2, restarting a container will delete any files saved to the container after it started. You need to make sure that things like installed plugins and image uploads do not write to the container's local filesystem. (I'm not sure whether WordPress lets you write such files to other places like GCS/S3 buckets.) To achieve this, you'd probably end up using something like the https://wordpress.org/plugins/wp-stateless/ plugin or gcs-media-plugin.
Any file written to the local filesystem of a Cloud Run container also counts toward the container's available memory, so your application may run out of memory if you keep writing files to it.
Long story short, if you can make sure your WP installation doesn't write/modify files on its local disk, it should work fine.
I think Cloud Run might be the wrong tool for the job here since it runs "stateless" containers, and it's pretty damn hard to make WordPress stateless, especially if you're installing themes/plugins, configuring things etc. Not to mention, your Cloud SQL server won't be "serverless", and you'll be paying for it while it's not getting any requests as well.
(P.S. This would be a good exercise to try out and write a blog post about! If you do that, add it to the awesome-cloudrun repo.)
I am new to Docker. Using Kitematic, how can I setup a Docker container containing the following?
Apache, Memcached, MySQL, Nginx, PHP FPM
Should I find one single image with all of these? If so, how do I find it on https://hub.docker.com? It doesn't seem possible to filter by the above requirements.
Or should I install these as separate containers?
Bart,
I don't know anything about kitematic but I can give you some general information though to clear things up.
The general consensus is to run only a single process per container. There are lots of discussions and information around why this would be good or bad; one such discussion, for example: https://devops.stackexchange.com/questions/447/why-it-is-recommended-to-run-only-one-process-in-a-container.
That said, these are the images I would choose for an environment with the software you described above:
Memcached: https://hub.docker.com/_/memcached
MySQL: https://hub.docker.com/_/mysql
Nginx: https://hub.docker.com/_/nginx
PHP FPM: https://hub.docker.com/_/php
How do I find these images? I go to hub.docker.com and search for the software I want, then start with the official images and see if they suit my needs. If they do, great! Otherwise, I look for non-official images, and if I still don't find what I want, I extend an existing image by creating a custom one based on an image from hub.docker.com.
Some more explanation about the last one, PHP. PHP is distributed with multiple tags. On the Docker Hub page ('Description' tab) you can see the supported tags. Clicking the tag you are interested in leads you to a GitHub repo where the Dockerfile is hosted. This file contains the commands used to construct the image you are researching. You can check the tags to see which one installs the software you need. For example, there are PHP tags where Apache is installed (e.g. 7-apache) and tags where FPM is installed (e.g. 7-fpm).
Hope this will help you with the research about what images to use!
You need to run those images within the same Docker network, through docker-compose (and its associated docker-compose.yml).
The docker-compose support in the Kitematic UI, though, is still an open issue.
You can't find all of these containers as one image. What you can do is create a docker-compose file and add all those independent images to it.
This way you can handle all your containers as services in a single file, with their dependencies too.
For further info refer to https://docs.docker.com/compose/
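To make that concrete, a starting docker-compose.yml for the stack in the question might look like this. Image tags, ports, and the password are illustrative only; Apache is omitted here since nginx + PHP-FPM already fills the web-server role, but an httpd service could be added the same way:

```yaml
version: "3.7"
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
    depends_on:
      - php
  php:
    image: php:7-fpm
  mysql:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example  # placeholder
  memcached:
    image: memcached
```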
New question:
I've followed the guestbook tutorial here: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples/guestbook/README.md
And the output of my commands match their outputs exactly. When I try to access the guestbook web server, the page does not load.
Specifically, I have the frontend on port 80, I have enabled http/s connections on the console for all instances, I have run the command:
gcloud compute firewall-rules create --allow=tcp:<PortNumberHere> --target-tags=TagNameHere TagNameHere-<PortNumberHere>
and also
cluster/kubectl.sh get services guestbook -o template --template='{{(index .status.loadBalancer.ingress 0).ip}}'
But when I run curl -v http://<IP>:<PortNumberHere>, the connection simply times out.
What am I missing?
Old Question - Ignore:
Edit: Specifically, I have 3 separate docker images. How can I tell kubernetes to run these three images?
I have 3 docker images, each of which use each other to perform their tasks. One is influxdb, the other is a web app, and the third is an engine that does data processing.
I have managed to get them working locally on my machine with docker-compose, and now I want to deploy them on Google Compute Engine so that I can access them over the web. I also want to be able to scale the software. I am completely, 100% new to cloud computing, and have never used GCE before.
I have looked at Kubernetes, and followed the docs, but I cannot get it to work on a gce instance. What am I missing/not understanding? I have searched and read all the docs I could find, but I still don't feel any closer to getting it than before.
https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/gce.md
To get the best results on SO you need to ask specific questions.
But, to answer a general question with a general answer, Google's Cloud Platform Kubernetes wrapper is Container Engine. I suggest you run through the Container Engine tutorials, paying careful attention to the configuration files, before you attempt to implement your own solution.
See the guestbook to get started: https://cloud.google.com/container-engine/docs/tutorials/guestbook
To echo what rdc said, you should definitely go through the tutorial, which will help you understand the system better. But the short answer to your question is that you want to create a ReplicationController and specify the containers' information in the pod template.
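For example (using the API of that era; the name, labels, and port are illustrative), a ReplicationController for the influxdb image might look like this, with the web app and engine getting analogous files:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: influxdb
spec:
  replicas: 1
  selector:
    app: influxdb
  template:
    metadata:
      labels:
        app: influxdb
    spec:
      containers:
        - name: influxdb
          image: influxdb
          ports:
            - containerPort: 8086
```

You would then create it with something like cluster/kubectl.sh create -f influxdb-rc.yaml.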