I am trying to run FoundationDB using a Docker image on macOS, as below.
docker run --init --rm --name=fdb-0 foundationdb/foundationdb:6.2.22
Starting FDB server on 172.17.0.2:4500
This seems to start. But when I connect to fdbcli after logging into the container, I get the following errors:
docker exec -it fdb-0 /bin/bash
root@9e8bb6985be5:/var/fdb# fdbcli
Using cluster file `/var/fdb/fdb.cluster'.
The database is unavailable; type `status' for more information.
Welcome to the fdbcli. For help, type `help'.
fdb> status
Using cluster file `/var/fdb/fdb.cluster'.
The coordinator(s) have no record of this database. Either the coordinator
addresses are incorrect, the coordination state on those machines is missing, or
no database has been created.
172.17.0.2:4500 (reachable)
Unable to locate the data distributor worker.
Unable to locate the ratekeeper worker.
I saw this issue https://forums.foundationdb.org/t/fdbcli-access-external-docker/1069, but I could not successfully run it on the host network either. Any help would be appreciated.
Try running fdbcli with fdbcli --exec "configure new single memory ; status". This will create the new database with single redundancy and the memory storage engine.
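For example, from the host, using the container name fdb-0 from the question:
docker exec -it fdb-0 fdbcli --exec "configure new single memory ; status"
After this, running status again should report the database as available.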
I am trying to create a Sitecore 10 image using Docker on Windows 10 Enterprise locally, but I am getting unhealthy containers. Please help me out, as I have tried the various steps that were suggested in the forums.
I am getting the errors below:
Creating network "sitecore-xp0_default" with the default driver
Creating sitecore-xp0_solr_1 ... done
Creating sitecore-xp0_mssql_1 ... done
Creating sitecore-xp0_id_1 ... done
Creating sitecore-xp0_solr-init_1 ... done
Creating sitecore-xp0_xconnect_1 ... done
Creating sitecore-xp0_cm_1 ... done
ERROR: for cortexprocessingworker Container "992574e988e3" is unhealthy.
ERROR: for xdbautomationworker Container "992574e988e3" is unhealthy.
ERROR: for xdbsearchworker Container "992574e988e3" is unhealthy.
ERROR: for traefik Container "933b548fc2f9" is unhealthy.
ERROR: Encountered errors while bringing up the project.
I checked the following things:
docker-compose stop in PowerShell.
docker-compose down in PowerShell.
iisreset /stop in PowerShell to make sure that the required ports are free.
docker-compose up -d in PowerShell.
I also stopped and removed the containers and executed docker-compose.exe up --detach multiple times, but no luck.
Check the .env file and make sure SITECORE_LICENSE has a value.
You may need to run the init.ps1 file.
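For example, from PowerShell in the folder that contains your compose files (the -LicenseXmlPath parameter is from Sitecore's getting-started package, so verify it against your version; the license path is a placeholder):
.\init.ps1 -LicenseXmlPath "C:\path\to\license.xml"
Get-Content .env | Select-String "SITECORE_LICENSE"
The second command should print a long Base64-encoded value; if it prints nothing, the license was never populated.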
Based on the logs now provided in the comments above, my suggestion would be to check the collection SQL connection string to the shardsmanager database.
You can inspect the SQL container in Docker for Windows to find the IP address of the SQL server, then connect to it with SSMS and try the credentials you have in the current connection string.
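For example, assuming the SQL service container is the sitecore-xp0_mssql_1 shown in the compose output above (adjust to whatever docker ps reports):
docker inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}" sitecore-xp0_mssql_1
Point SSMS at the printed IP address and try the credentials from the collection connection string.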
Edit: looking again at the exception, it looks like it can't find the SQL server, yet the CM server appears to have no problem finding the same server. So compare the web/master/core connection strings to the collection one; I'm guessing the SQL server portion will be different?
Please help; I'm not even sure if I am asking the right question here, as there are many new components in my environment (I am new to developing on a Windows OS, new to the Visual Studio Code IDE, and new to Docker within VS Code). This question could pertain to any of those factors.
Scenario:
I boot up my Windows 10 machine, open VS Code, and go to the command line from within VS Code (I am using a Git Bash shell within VS Code). From this terminal I start my project with the following command: docker-compose up --build
As a result of running this command, I see output in my terminal indicating that all three of my containers have started up. (Note: this is a Flask application using Postgres with an Angular front end; each one has its own container.)
My application has a test API endpoint which, when called, responds with 'status ok'. Upon hitting that endpoint in Postman, I see a couple of lines of output in my terminal indicating that the application has processed the request for the specific URL. Everything is great.
Now I close all my applications and reboot the machine.
Upon rebooting, I see a message from the system informing me that my Docker containers are starting. This is good. But now I would like to get back to the state where I can see the same output I saw when I ran docker-compose up; however, that output is no longer in the VS Code terminal.
My question is: how can I get that output again without shutting down the Docker containers and rebuilding them? Sure, I could do that, but it seems like an unnecessary step, since the containers auto-restarted on system reboot.
Is there a log I can tail?
Additional info:
In the Dockerfile for the API server, the server is started with the following command:
CMD ["./entrypoint.local.sh"]
In the entrypoint.local.sh file, the actual application is started with this command:
uwsgi --ini /etc/uwsgi.local.ini --chdir /var/www/my-application
Final note: this is not an application I created, so I would like to avoid changing it, since that would affect others on my team.
In your terminal run: docker-compose logs --follow <name-of-your-service>
Or see every log stream for every service with docker-compose logs --follow
You can find the name of your docker-compose service by looking at each key under services: in your docker-compose.yml
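For example, if the Flask API's service key is api (a placeholder; substitute your actual key), this follows the log and limits the initial output to the most recent lines:
docker-compose logs --follow --tail=100 api
The --tail flag is handy after a reboot, when the containers have already accumulated a lot of output.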
Suppose I create a Docker image for one of my applications and publish it on Docker Hub.
Many users download this image and run the application in their containers, and the application writes its logs to a folder.
Now, as the developer, how can I see those application logs from my machine when the container is on a remote computer to which I don't have access?
If it were a virtual machine, I could ssh to that machine, go to that folder, and see the logs for that particular application; how is this possible with Docker?
I am not talking about Docker event logs, but the logs generated by my Python application with the logging module. Could you please help me with how to handle this case in Docker?
I don't have any experience working with Docker.
docker exec can be used to run commands inside a Docker container. But in your case the containers are running on a remote machine, not on your local machine, so you have two options:
1. ssh into the remote machine and then use docker exec to check the logs.
2. ssh directly into the Docker container (this only works if the container itself runs an SSH server, which most do not).
But in both scenarios, the end users will need to give you SSH access to the remote machines.
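A minimal sketch of option 1, with a hypothetical host name, container name, and log path:
ssh user@remote-host
docker exec -it my-app-container tail -f /app/logs/app.log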
I hope this helps.
If your application writes log files to the container filesystem, this is one of a couple of good uses for Docker bind mounts. If the operator (the person running the container; not you, the original software author) starts the container with
docker run -v $PWD/logs:/app/logs ... you/yourimage
then they will be able to read the log files directly on their host system.
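For example, with the bind mount above, the operator can follow the logs from the host; the logs directory comes from the docker run example, while the file name is a placeholder:
tail -f ./logs/app.log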
As the original application developer, you have no access to these logs. This is the same as every other (non-SaaS) application: the end user installs software on their system and runs it, but it's on a system you can't log into, so you can't directly see things like log files. The techniques for dealing with this are the same as anything else: when a user files a bug report make sure they provide a sufficient reproduction, log files, and relevant configuration, and reproduce the issue yourself locally.
I am learning CDH and Docker and have no prior experience setting up either tool. After reading the documentation, I managed to run the CDH Docker image in a Mac environment and also completed the example given in the quick start guide. But when I started my MacBook the next day to learn something new, my previous work was gone, which I found very strange, and I could not even see the container running.
What I really want is to not lose my work after stopping the Docker container. Could you please guide me on how to configure Docker so that I will not lose my work even after restarting Docker?
Every instance of a docker run will allocate a new filesystem, essentially starting from scratch.
If you actually want to "save" your work, then you need to volume mount (using the -v docker flag) your local filesystem into the container for at least the following directories (see the sketch below):
HDFS Data Directory
NameNode Data Directory
/home/cloudera
I think the Hadoop data folders are somewhere under /var/lib/hadoop-* by default.
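A minimal sketch, assuming the cloudera/quickstart image with its documented startup command and the default /var/lib/hadoop-hdfs data path (verify both against your image before relying on them; the host paths are placeholders):
docker run --hostname=quickstart.cloudera --privileged=true -t -i \
  -v "$PWD/cdh-data/hdfs":/var/lib/hadoop-hdfs \
  -v "$PWD/cdh-data/home":/home/cloudera \
  cloudera/quickstart /usr/bin/docker-quickstart
Because the data now lives on the host, you can remove the container and run the same command again to pick up where you left off.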
The better alternative for saving your workloads would be the CDH VM, which has a persistent HDD associated with it.
We are trying to create a master-master cluster of two MongooseIM instances on AWS in the same virtual network.
All necessary ports are open in the AWS security group.
I suspect some issue with the MongooseIM setup on Ubuntu 14.04 LTS.
After running the join_cluster command on one of the nodes, we get the following error (see the screenshot):
Error: {error,{badmatch,{error,eacces}}}
A screenshot with details is attached.
The server configuration was not changed, except for the VM args shown in the attached screenshot.
Is this an issue with your binary, or some other glitch?
I ran into this issue myself. MongooseIM uses Erlang's built-in Mnesia storage system for a lot of information, including cluster topology. The default path for Mnesia's storage is /var/lib/mongooseim. When you run mongooseimctl join_cluster ..., it needs to wipe its Mnesia store and pull a fresh copy from the cluster it is joining. The issue arises because it also tries to delete /var/lib/mongooseim itself, which it does not have permission to do: the mongooseim user it runs as has no write permission on the parent directory, /var/lib. Nor should it.
The way I fixed this was by creating a subdirectory which it could safely delete and recreate, and configuring it to use that as its Mnesia directory:
sudo mkdir /var/lib/mongooseim/mnesia
sudo chown mongooseim:mongooseim /var/lib/mongooseim/mnesia
Configuration for the Mnesia directory exists by default in /etc/mongooseim/app.config. In mine it was the third line; originally it looked like this:
{mnesia, [{dir, "/var/lib/mongooseim"}]},
I changed the path to the new directory I created:
{mnesia, [{dir, "/var/lib/mongooseim/mnesia"}]},
After that, I stopped and started MongooseIM and was able to join the cluster successfully:
mongooseimctl stop
mongooseimctl start && mongooseimctl started
mongooseimctl join_cluster mongooseim#other.node.name
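To verify the join succeeded, you can ask Mnesia which nodes it knows about (mongooseimctl inherits a mnesia info command from its ejabberd lineage, but this is an assumption; check your version's help output):
mongooseimctl mnesia info | grep running_db_nodes
Both node names should appear in running_db_nodes.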