Created Assets are not persisted between Fabric server reboots - hyperledger

I followed the Hyperledger-composer tutorial
to install all components and everything went well.
I was able to create assets and participants and could interact with them via the REST API services.
However, after I rebooted the Fabric network, all the assets and participants were gone and I had to recreate them.
Did I miss some settings in docker-compose.yaml or elsewhere for data persistence?
I did follow the instructions on page 16, "A Note on Data Persistence", about mounting a directory into the container.

If you shut down all Fabric nodes then you will lose all data. AFAIK Fabric does not yet have tools to replay the transaction logs from disk on a restart of the entire network. You should use the Fabric support channels to confirm this, however.
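For reference, the "Note on Data Persistence" mentioned in the question boils down to mounting a host directory into the container that holds the world state, so the database files survive a container restart. A minimal sketch, assuming a CouchDB state database (the image tag, host path, and container path are assumptions, not taken from the tutorial):

    # mount a host directory so CouchDB keeps its data across container restarts
    docker run -d --name couchdb \
      -v /home/user/fabric-data/couchdb:/opt/couchdb/data \
      couchdb:2.3

Even with such a mount in place, the answer above still applies: a full shutdown of all Fabric nodes can leave the network unable to recover the ledger.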

Related

ERR_CONNECTION_REFUSED when trying to view software running in docker containers via url

I'm new to Docker, and have been reconstructing a test environment with instructions that were left to me by the previous developer.
We have several in-house pieces of job tracking software that both the outside clients and internal employees can access.
The job tracking software has a few dependencies that require a mailhog container to be spun up, and a second container which contains all of the other required sub software.
Despite my amateur knowledge on the subject, and the complexity of the software, I successfully installed Docker with the required Linux Kernels and Ubuntu.
I pulled the required images I needed, and successfully built the other ones.
I configured the in-house software with all of the testing settings enabled, and have it all grouped in the proper file directory.
For all intents and purposes, it should be configured fine.
The trouble starts with starting the containers.
Running "docker-compose up -d" in the proper sub-directory throws this error:
Error response from daemon: Get "https://registry-1.docker.io/v2/": remote error: tls: handshake failure
I figured screw it, I'll build it right from Docker Desktop instead of using the command line, and it worked! The two containers I needed were built successfully.
I even ran the docker test command to make sure they were really running.
So here's where the trouble just gets worse.
The ex-developer states the job tracking software can then be viewed in the browser at "https://localhost/jobtracker/".
The page doesn't load and throws an ERR_CONNECTION_REFUSED error.
I am at my wits' end with troubleshooting, because I quite simply don't have the networking or development knowledge to search for the right things. To make matters even more frustrating, the remote outsourced devs have their environment up and running, despite using the exact same instructions I have.
So I'm down to two possible issues: either somewhere in the setup process I messed something up, or my office network security is breaking something. However, I was told by our lead IT specialist that I should have all of the same network permissions the outsourced guys on the VPNs have.
I am now reaching out on several Docker web sources to find a solution.
I am on Windows 10.
Thankfully my employers are pretty cool about me learning as I go, and understand the technical difficulties. I just don't want to squander the opportunity, and would like to make some progress.
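A common first check for ERR_CONNECTION_REFUSED against a containerized site is whether the web container actually publishes a host port and whether anything answers locally. A quick sketch (the container name, port, and path are assumptions about this setup):

    # list running containers and their published ports
    docker ps --format "table {{.Names}}\t{{.Ports}}"
    # show port mappings for the web container (name is an assumption)
    docker port jobtracker_web
    # probe the URL directly, ignoring certificate problems
    curl -vk https://localhost/jobtracker/

If no host port (e.g. 443 or 80) shows up as published, the browser has nothing to connect to even though the containers are running.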

HTTP 503 errors from Cloud Run app in one GCP project but not the other

The issue
I am using the same container (with similar resources) in 2 projects -- production and staging. Both have custom domains set up with Cloudflare DNS and are in the same region. Container builds are done in a completely different project and IAM is used to handle access to these containers. All 5 services in both projects have a concurrency of 80 and a 300-second timeout.
All was working fine until 3 days ago, but since yesterday almost all Cloud Run services on staging (thankfully) started throwing 503s randomly and for most requests. Some services had not even been deployed for a week. The same containers are running fine in the production project, no issues.
Ruled out causes
Anything to do with Cloudflare (I tried the URL Cloud Run gives; it has the 503 issue too)
Anything with the build or containers (I tried the demo hello world container in Go; it has the issue too)
Resources: I tried giving it 1 GB RAM and 2 CPUs, but the problem persisted
Issues with deployment (deployed multiple branches; didn't help)
Issue in the code (routed traffic to a 2-3 day old revision, but the issue was still there)
Issue at the service level (I used the same container to create a completely new service; it also had the issue)
Possible causes
Something in Cloud Run or the Cloud Run load balancer
Maybe some env vars, but that doesn't seem to be the issue either
Response Codes
I just ran a quick check with vegeta (30 secs at 10 rps) against the same container on staging and production for a static file path, and below are the responses:
Staging
Production
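For reference, the kind of vegeta run described above could look roughly like this (the target URL and path are placeholders, not the poster's actual endpoints):

    # 30-second load test at 10 requests per second against a static asset
    echo "GET https://staging.example.com/static/app.css" | \
      vegeta attack -duration=30s -rate=10 | \
      vegeta report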
If anyone has any insights on this it would help greatly.
Based on your explanation, I cannot tell what's going on. You explained what doesn't work but didn't point out what does work (does your app run locally? Are you able to run a hello world sample application?)
So I'll recommend some debugging tips.
If you're getting an HTTP 5xx status code, first check your application's logs. Is it printing ANY logs? Are there logs for the request? Was your application deployed with a "verbose" logging setting?
Try hitting your *.run.app domain directly. If it's not working, then it's not a domain, DNS, or Cloudflare issue; try debugging and/or redeploying your app, and deploy something that works first. If the *.run.app domain works, then the issue is not in Cloud Run.
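A quick way to hit the *.run.app URL directly and see the full response might be (the hostname is a placeholder):

    curl -sv https://my-service-abc123-uc.a.run.app/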
Make sure you aren't using Cloudflare in proxy mode (i.e. your DNS should point to Cloud Run, not Cloudflare), as there's currently a known issue with certificate issuance/renewals when domains are behind Cloudflare.
Beyond these, if a redeploy seems to solve your problem, try redeploying. It could very well be that some configuration recently diverged between the two projects.
See Cloud Run Troubleshooting
https://cloud.google.com/run/docs/troubleshooting
Do you see 503 errors under high load?
The Cloud Run (fully managed) load balancer strives to distribute incoming requests over the necessary amount of container instances. However, if your container instances are using a lot of CPU to process requests, the container instances will not be able to process all of the requests, and some requests will be returned with a 503 error code.
To mitigate this, try lowering the concurrency. Start from concurrency = 1 and gradually increase it to find an acceptable value. Refer to Setting concurrency for more details.
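As a sketch, lowering the concurrency on an existing service can be done with gcloud (the service name and region are placeholders):

    gcloud run services update my-service --concurrency 1 --region us-central1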

How to run WordPress on Google Cloud Run?

Google Cloud Run is new. Is it possible to run the WordPress Docker image on it? Perhaps using GCE to host the MySQL/MariaDB database. I can't find any discussion on this.
Although I think this is possible, it's not a good use of your time to go through this exercise. Cloud Run might not be the right tool for the job.
UPDATE someone blogged a tutorial about this (use at your own risk): https://medium.com/acadevmy/how-to-install-a-wordpress-site-on-google-cloud-run-828bdc0d0e96
Here are a few points to consider:
1. (UPDATE: this is no longer true) Currently Cloud Run doesn't support natively connecting to Cloud SQL (MySQL). There have been some hacks like spinning up a cloudsql_proxy inside the container: How to securely connect to Cloud SQL from Cloud Run? which could work OK. (See the sketch after this list.)
2. You need to prepare your wp-config.php beforehand and bake it into your container image. Since your container will be wiped away every now and then, you should install your blog (which creates a wp-config.php) and bake the resulting file into the container image, so that when the container restarts it doesn't lose your wp-config.php.
3. Persistent storage might be a problem: similar to point #2, restarting a container will delete the files saved to the container after it started. You need to make sure stuff like installed plugins, image uploads etc. does NOT write to the local filesystem of the container. (I'm not sure if WordPress lets you write such files to other places like GCS/S3 buckets.) To achieve this, you'd probably end up using something like the https://wordpress.org/plugins/wp-stateless/ plugin or gcs-media-plugin.
4. Any file written to the local filesystem of a Cloud Run container also counts towards your container's available memory, so your application may run out of memory if you keep writing files to it.
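To put the Cloud SQL point into a concrete shape, a deployment today might look roughly like this. It is only a sketch: the service name, image, instance connection name, and env var wiring are assumptions, not a tested recipe.

    gcloud run deploy wordpress \
      --image gcr.io/MY_PROJECT/wordpress:latest \
      --add-cloudsql-instances MY_PROJECT:us-central1:wp-db \
      --set-env-vars WORDPRESS_DB_HOST=localhost:/cloudsql/MY_PROJECT:us-central1:wp-db \
      --allow-unauthenticated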
Long story short, if you can make sure your WP installation doesn't write/modify files on your local disk, it should be working fine.
I think Cloud Run might be the wrong tool for the job here since it runs "stateless" containers, and it's pretty damn hard to make WordPress stateless, especially if you're installing themes/plugins, configuring things etc. Not to mention, your Cloud SQL server won't be "serverless", and you'll be paying for it while it's not getting any requests as well.
(P.S. This would be a good exercise to try out and write a blog post about! If you do that, add it to the awesome-cloudrun repo.)

What is the benefit of dockerizing an SPA web app

I dockerize my SPA web app by using nginx as the base image, then copying in my nginx.conf and build files. As Dockerize Vue.js App mentions, I think many SPA dockerizing solutions are similar.
If I don't use Docker, I first build the SPA code and then copy the build files to the nginx root directory (after installing/setting up nginx, I barely change it at all).
So what's the benefit of dockerizing SPA?
----- update -----
One answer said "If the app is dockerized each time you are releasing a new version of your app the Nginx server gets all the new updates available for it." I don't agree with that at all. I don't need the latest version of nginx; after all, I only use the basic features of nginx. Some of my team members just use the nginx version bundled with Linux when doing development. If my Docker image uses the latest nginx, it actually creates a different environment than the development environment.
I realize my question will probably be closed because it will be seen as opinion-based. But I have googled it and can't find a satisfying answer.
If I don't use Docker, I first build the SPA code and then copy the build files to the nginx root directory (after installing/setting up nginx, I barely change it at all).
This is a security concern... it sounds like the server is being treated as fire-and-forget.
If the app is dockerized each time you are releasing a new version of your app the Nginx server gets all the new updates available for it.
Bear in mind that if your app does not release new versions on a weekly basis, then you need to consider rebuilding the Docker images at least weekly in order to get the updates and keep everything up to date with the latest security patches.
So what's the benefit of dockerizing SPA?
Same environment across development, staging and production. This is called 100% parity across all stages where you run your app, and it is true no matter what type of application you deploy.
If something doesn't work in production you can pull the Docker image by its digest and run it locally to debug and try to understand where the problem is. If you need to ssh into a production server, it means that your automation pipeline has failed, or maybe you are not even using one...
Tools like Webpack compile Javascript applications to static files that can then be served with your choice of HTTP server. Once you’ve built your SPA, the built files are indistinguishable from pages like index.html and other assets like image files: they’re just static files that get served by some HTTP server.
A Docker container encapsulates a single running process. It doesn’t really do a good job at containing these static files per se.
You’ll frequently see “SPA Docker containers” that run a developer-oriented HTTP server. There’s no particular benefit to doing this, though. You can get an equally good developer experience just by developing your application locally, running npm run build or whatever to create a dist directory, and then publishing that the same way you’d publish other assets. An automation pipeline is helpful here, but this isn’t a task Docker makes wildly simpler.
(Also remember when you do this that the built application runs on the user’s browser. That means it can’t see any of the Docker-internal networking machinery: it can’t use the Docker-internal IP addresses and it can’t use the built-in Docker DNS service. Everything it reaches has to be on docker run -p published ports and it has to use a DNS name that reaches the host. The browser literally has no idea Docker is involved in this at all.)
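The manual build-and-publish flow that answer describes can be as small as this (the build command and web root are assumptions about a typical nginx setup):

    # produce the static bundle
    npm run build
    # copy it to wherever your HTTP server serves files from
    cp -r dist/. /usr/share/nginx/html/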
There are a few benefits.
Firstly, building a Docker image means you are explicitly stating what your application's canonical run-time is - this version of nginx, with that SSL configuration, whatever. Changes to the run-time are in source control, so you can upgrade predictably and reversibly. You say you don't want "the latest version" - but what if that latest version patches a critical security vulnerability? Being able to upgrade predictably, on "disposable" containers means you upgrade when you want to.
Secondly, if the entire development team uses the same Docker image, you avoid the challenges with different configurations giving the "it works on my machine" response to bugs - in SPAs, different configurations of nginx can lead to different behaviour. New developers who join the team don't have to install or configure anything, and can use any device they want - they can be certain that what runs in Docker is the same as it is for all the other developers.
Thirdly, by having all your environments containerized (not just development, but test and production), you make it easy to move versions through the pipeline and only change the environment-specific values.
Now, for an SPA, these benefits are real, but may not outweigh the cost and effort of creating and maintaining Docker images - inevitably, the Docker image becomes a bottleneck and the first thing people blame. I'd only invest in it if you see lots of environment-specific pain (suggesting having a consistent run-time environment is necessary), or if you see lots of "it works on my machine" type of bug.

I have set up the sample app on Google Cloud Platform successfully. After running a test, I am now wondering where the assets that I create are stored.

I have created a Hyperledger installation on Google Cloud Platform.
Secondly, I then installed the Hyperledger sample network. All this went fine. The asset creation also went fine after I created the static IP on the VM. I am now wondering where my "hello world" asset ended up.
I saw that a verification peer should have a /var/hyperledger...
Doing the default Google Cloud Platform installation, what are my peers? This all seems to be hidden. Does that mean that the data is just "out there"?
I am now checking how to tweak the Google Cloud Platform installation to have private data storage.
When you are using Google Cloud Platform and a VM to run everything, all your information is stored on the persistent disk you selected while installing the platform.
Regarding assets, you cannot physically see the assets in Fabric; they are stored in LevelDB or CouchDB. The default configuration of Fabric is LevelDB.
If you configure CouchDB then you can see the data via its URL. Hope this helps.
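If the network is configured with CouchDB as the state database, the world state can be inspected over CouchDB's HTTP API, for example (this assumes the default CouchDB port and that it is reachable from where you run the command; the database names will vary per channel and chaincode):

    # list the databases CouchDB holds (Fabric creates one per channel/chaincode)
    curl http://localhost:5984/_all_dbs
    # the Fauxton web UI, if enabled, is served at http://localhost:5984/_utils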
