Can't create datasets and load images in COCO annotator - docker

I'm trying to annotate images with COCO keypoints for pose estimation using https://github.com/jsbroks/coco-annotator. As described in the Installation section, I cloned the repo, installed Docker and Docker Compose, and started the containers with $ docker-compose up; they are running.
I then went to the website https://annotator.justinbrooks.ca/, created one user and some datasets, but they do not appear in the repo's datasets/ folder. I tried to create them manually and to load images into them, but they do not appear in the website's graphical interface.
I tried scanning, reloading the webpage, and creating other datasets, but nothing works. The container seems to work properly: it detects when I put an image in the datasets/ folder, but it throws some errors.
Here are the last lines of the log (I can post the whole thing):
annotator_webclient | [File Watcher] File /datasets/haricot.jpg for created
annotator_webclient | [File Watcher] Adding new file to database: /datasets/image
annotator_message_q | 2019-05-16 13:01:08.841 [error] <0.461.0> closing AMQP connection <0.461.0> (172.18.0.4:42614 -> 172.18.0.2:5672):
annotator_message_q | missed heartbeats from client, timeout: 60s
Am I missing something fundamental, or is there a bug?
I'm using Safari on macOS, and I also tried Firefox on Ubuntu 18. I'm not behind a proxy, but maybe some ports are not open or something like that.

Creator of COCO Annotator here.
I think you are missing a fundamental concept: the demo runs on a VM and has nothing to do with your local instance.
Once you have docker-compose running, you can access your local instance at http://localhost:5000/.
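For reference, a minimal sketch of the local workflow (assuming the default docker-compose.yml from the repo, which serves the web client on port 5000):
$ git clone https://github.com/jsbroks/coco-annotator
$ cd coco-annotator
$ docker-compose up
# browse to http://localhost:5000/ ; datasets created there are backed
# by the local datasets/ folder, not by the public demo site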

Related

Problem understanding how to, if at all possible, run my docker file (.tar)

I received a .tar Docker file from a friend who told me it should contain all dependencies for a program that I've been struggling to get working, and that all I need to do is "run" the Docker file. The file is a .tar of around 3.1 GB. The program it was set up to run is called OpenSimRT. The GitHub link for the program is as follows:
https://github.com/mitkof6/OpenSimRT
The google drive link to the Docker file is as follows:
https://drive.google.com/file/d/1M-5RnnBKGzaoSB4MCktzsceU4tWCCr3j/view?usp=sharing
This program has many dependencies; notably, it runs on Ubuntu 18.04 and OpenSim 4.1.
I'm not a computer scientist by any means, so I've been struggling just to learn Docker basics like loading and running an image. However, I desperately need this program to work. If you have any steps or advice on how to run this .tar, I'd greatly appreciate it. Alternatively, if you can find a way to get OpenSimRT up and running and post those steps, I'd be more than happy with that solution as well.
I've tried the commands "docker run" and "docker load" followed by their respective tags, file paths, args, etc. However, even when I fix various issues, I always get stuck on a missing /var/lib/docker/tmp/docker-import-...(random numbers) file. The numbers change every so often while I try to solve the issue, but eventually I always end up with some variation of this error: Error response from daemon: open /var/lib/docker/tmp/docker-import-3640220538/bin/json: no such file or directory.
PS: I have already extracted the .tar and there is no install guide/instructions, .exe, or installer application, so I'm not sure how to get the program installed and running.
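For context, a sketch of the two usual ways to bring a .tar into Docker; which one applies depends on how the archive was created (the opensimrt names below are illustrative):
$ docker load -i opensimrt.tar                  # for archives made with `docker save` (image plus metadata)
$ docker import opensimrt.tar opensimrt:latest  # for archives made with `docker export` (bare filesystem)
$ docker images                                 # check which name/tag actually got loaded
$ docker run -it opensimrt:latest /bin/bash     # start an interactive shell in the image
The bin/json in the error is a hint: docker load expects each top-level directory in the tar to be an image layer containing a json metadata file, so tripping over bin/ usually means the archive is a plain filesystem export, in which case docker import is the matching command.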

Rails configuration with docker-compose remote sdk fails on RubyMine

So I work with RubyMine, and I configured my docker-compose integration as in this tutorial, but I get an error when I simply hit the 'run' button:
ERROR: Duplicate mount points: [/home/kyrela/railsproject:/railsproject:rw, /home/kyrela/railsproject:/railsproject:rw]
A simple docker-compose up from the terminal works.
I found, when launching the same command as RubyMine but removing some arguments, that the error is caused by the docker-compose.override.[number].yml file that RubyMine generates automatically from my configuration. Without it, everything works perfectly.
But my configuration is extremely basic:
I only set the IP address and port (the same ones it currently uses with a plain docker-compose up from the terminal), the remote SDK (the one from my container), and the docker-compose method to 'up'. That's all; I left the rest blank or at the default values.
From some research on this error, it's apparently a bug that can be fixed with a simple docker-compose restart. That didn't work for me.
Does anyone know how to get rid of this error?
If any information is missing, just leave a comment and I will edit my message with the requested details.
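For illustration, a compose file of the shape that triggers this (a sketch; the service name and port are assumptions, the mount path comes from the error message). RubyMine's auto-generated docker-compose.override.[number].yml repeats the same volume mapping, so Compose sees the mount twice:
# docker-compose.yml
version: '3'
services:
  web:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - /home/kyrela/railsproject:/railsproject   # duplicated verbatim by the override file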

How to change the system variable in Geoserver

I'm on Linux, using PostgreSQL, GeoServer, and OpenLayers.
I want to display a shapefile with GeoServer. I stored it in PostgreSQL and imported the table into GeoServer. The size of the shapefile is 2.2 GB.
When I try to display the shapefile with the OpenLayers viewer (in GeoServer), I get a white screen and this error in the logs:
ERROR [geoserver.ows] org.geoserver.platform.ServiceException: Rendering process failed ....
Caused by: java.lang.RuntimeException: org.postgresql.util.PSQLException: ERROR: could not write to tuplestore temporary file: No space left on device where: SQL function "st_force_2d" statement 2
I saw here: https://docs.geoserver.org/stable/en/user/services/wfs/outputformats.html that the size limit for shapefiles is 2 GB, but that this limit can be modified by changing the system variable GS_SHP_MAX_SIZE.
How can I do that? I searched the Internet but couldn't find a solution.
The link you mentioned says:
it’s possible to modify those limits by setting the GS_SHP_MAX_SIZE and GS_DBF_MAX_SIZE system variables to a different value.
So I think it works much like the GEOSERVER_DATA_DIR configuration.
For a binary installation: you should set an OS environment variable. I'm not sure, but the command is something like this:
$ export GS_SHP_MAX_SIZE=<limit of .shp size in bytes>
$ export GS_SHP_MAX_SIZE=3000000000
If that doesn't work, look up how to set environment variables in your Linux distribution.
For a web archive installation: you should change the web server or GeoServer configuration. There are two ways of doing it:
Context parameter: find and edit web.xml in the WEB-INF folder, then add this context parameter under the root element (the <web-app> tag):
<context-param>
  <param-name>GS_SHP_MAX_SIZE</param-name>
  <param-value>limit of .shp size in bytes</param-value>
</context-param>
Java system property: this is very similar to the binary installation, except that you set the variable for the web server. Java system properties are passed with the -D prefix; if you are using Tomcat, add this to your system variables:
$ export CATALINA_OPTS="-DGS_SHP_MAX_SIZE=<limit of .shp size in bytes>"
$ export CATALINA_OPTS="-DGS_SHP_MAX_SIZE=3000000000"
Be careful when changing a Java system property! It affects the whole Apache Tomcat instance and might cause problems in other web apps installed there.
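If you'd rather not set this in your shell profile, a common alternative (a sketch, assuming a standard Tomcat layout under $CATALINA_HOME) is Tomcat's setenv script, which catalina.sh sources on startup:
# $CATALINA_HOME/bin/setenv.sh  (create the file if it doesn't exist)
CATALINA_OPTS="$CATALINA_OPTS -DGS_SHP_MAX_SIZE=3000000000"
export CATALINA_OPTS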

Docker failing to see updated fixtures CSV in rspec test directory

This one is quite strange.
I am running a very typical Docker container that holds a Rails API. Inside this API, I have an endpoint which takes an upload of a CSV and does some things and stuff.
Here is the exact flow:
vim spec/fixtures/bid_update.csv
# fill it with some data
# now we call the spec that uses this fixture
docker-compose run --rm web bundle exec rspec spec/requests/bids_spec.rb
# and now the csv is loaded and I can see it as plaintext
However, after creating this, I decided to change the content of the CSV, adding a column and a corresponding value for each row.
Now, when we run the spec again after saving, it sees the old version of the CSV: the one originally used at the breakpoint in the spec.
cat'ing the CSV on the host clearly shows the new content.
Restarting the VM does nothing. The only solution I've found is to docker-machine rm dev and build a new machine (my main one for this is called dev).
I am entirely perplexed as to what could cause this, or what a simple fix might be (rebuilding with all those images takes a while).
Ideas? Inform me I'm an idiot and I just had to press 0 for an operator and they would have fixed it?
Any help appreciated :)
I think it could be an issue with how VirtualBox shares folders with your environment. More information here: https://github.com/mitchellh/vagrant/issues/351#issuecomment-1339640
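One way to confirm it's the shared folder rather than rspec (a sketch, assuming the compose service is named web as in the question) is to compare what the host and the container each see for the same file:
$ cat spec/fixtures/bid_update.csv                              # host view
$ docker-compose run --rm web cat spec/fixtures/bid_update.csv  # container view
If the two outputs differ, the stale copy comes from the VirtualBox shared-folder layer, not from Docker or the test itself.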

Cassandra fails to initialize with error "Cannot add table 'role_members' to non existing keyspace 'system_auth'"

I am running a Cassandra cluster in Docker containers, using fleet for management. I can get the cluster up and running, but if I bring the units down with fleet and then back up again, the containers fail. The Cassandra log has this entry on the second start:
Cannot add table 'role_members' to non existing keyspace 'system_auth'.
Fatal configuration error; unable to start server. See log for stacktrace.
INFO 20:59:34 InetAddress /172.17.8.102 is now DOWN
ERROR 20:59:34 Fatal configuration error
org.apache.cassandra.exceptions.ConfigurationException: Cannot add table 'role_members' to non existing keyspace 'system_auth'.
at org.apache.cassandra.service.MigrationManager.announceNewColumnFamily(MigrationManager.java:284) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.service.MigrationManager.announceNewColumnFamily(MigrationManager.java:275) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.service.StorageService.maybeAddTable(StorageService.java:1046) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.service.StorageService.doAuthSetup(StorageService.java:1034) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:967) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:698) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:581) ~[apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:291) [apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:481) [apache-cassandra-2.2.0.jar:2.2.0]
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:588) [apache-cassandra-2.2.0.jar:2.2.0]
I can't find any information on this particular error, and I really have no idea why it's happening. The closest information I can find is that the system_auth keyspace needs special configuration if you are not using the default AllowAllAuthenticator; but I am using the default, and haven't changed it in the cassandra.yaml file.
Does anyone know why this might be happening?
Is it possible that you are using CassandraAuthorizer without PasswordAuthenticator? I think that combination might not work and could cause this particular error.
system_auth is not applicable to AllowAllAuthenticator; you need to use PasswordAuthenticator instead. If you configure cassandra.yaml in the following way:
authenticator: PasswordAuthenticator
authorizer: CassandraAuthorizer
And then restart Cassandra, it should create the system_auth keyspace for you. If you don't want to set up authorization, you can always use AllowAllAuthorizer instead. More information can be found in the Cassandra documentation.
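To verify (a sketch; cassandra/cassandra is the default superuser created when PasswordAuthenticator is enabled), connect with cqlsh after the restart and check that the keyspace now exists:
$ cqlsh -u cassandra -p cassandra
cqlsh> DESCRIBE KEYSPACE system_auth;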
This turned out to be a rather unusual configuration issue on my end. I was mapping /var/lib/cassandra on the host to /var/lib/cassandra inside my Docker container, but I was also inadvertently mapping /var/lib/cassandra/data to an auto-generated Docker directory on the host. As a result, when I stopped and restarted the containers, the data directory would disappear and Cassandra would fail as it tried to recreate data from the commitlog directory.
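A sketch of that failure mode, assuming an image that declares VOLUME /var/lib/cassandra/data (an assumption; adjust to the actual image). The anonymous volume shadows the data subdirectory of the bind mount, so data and commitlog diverge across restarts:
# data lands in an anonymous volume tied to this container
$ docker run -d -v /var/lib/cassandra:/var/lib/cassandra my-cassandra
# fix: bind-mount the data directory explicitly as well
$ docker run -d -v /var/lib/cassandra:/var/lib/cassandra -v /var/lib/cassandra/data:/var/lib/cassandra/data my-cassandra
# inspect which mounts a container actually has
$ docker inspect -f '{{json .Mounts}}' <container>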
I ran into this problem just by following the Datastax "Initializing a multiple node cluster (single data center)" tutorial.
I solved it by deleting the whole content of /var/lib/cassandra, not only the content of /var/lib/cassandra/system/.
Why?
I think Kris found the real source of the problem: when restarting, the C* service found the commit log full and tried to recover by replaying the commits found there, failing due to a different configuration and a different table structure...
