Docker swarm - Finding correct swarm.Version when updating services - docker

I want to scale the number of replicas on a service using the Go SDK.
This is the function that (I think) accomplishes this:
func (cli *Client) ServiceUpdate(ctx context.Context, serviceID string, version swarm.Version, service swarm.ServiceSpec, options types.ServiceUpdateOptions) (types.ServiceUpdateResponse, error)
Whenever I run it, however, I get an error:
Error response from daemon: rpc error: code = Unknown desc = update out of sequence
I'm fairly sure this occurs because updates must be applied in sequence, and the version number is what orders them.
But I don't know how to find the right version index!

Ah I figured it out!
Docker requires that you pass the service's current version along with the ServiceSpec when updating it, so the daemon can reject out-of-sequence updates; that was the issue I ran into.
You can easily get both through ServiceInspectWithRaw, which returns a swarm.Service whose Meta.Version field holds the current version. I totally missed that beforehand.
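The version check is optimistic concurrency control: read the current version, modify the spec, and submit both together. Here is a minimal self-contained sketch of that pattern in Go; the Version, ServiceSpec, and Service types below are hypothetical stand-ins that mimic the daemon-side check, not the actual Docker SDK types.

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical stand-ins for swarm.Version and swarm.ServiceSpec,
// illustrating Docker's version-based concurrency check.
type Version struct{ Index uint64 }

type ServiceSpec struct{ Replicas uint64 }

type Service struct {
	Version Version
	Spec    ServiceSpec
}

var errOutOfSequence = errors.New("update out of sequence")

// Update succeeds only if the caller's version matches the stored one,
// then bumps the version index, analogous to what the daemon does.
func (s *Service) Update(v Version, spec ServiceSpec) error {
	if v.Index != s.Version.Index {
		return errOutOfSequence
	}
	s.Spec = spec
	s.Version.Index++
	return nil
}

func main() {
	svc := &Service{Version: Version{Index: 5}, Spec: ServiceSpec{Replicas: 1}}

	// Wrong: a stale or guessed version index is rejected.
	err := svc.Update(Version{Index: 4}, ServiceSpec{Replicas: 3})
	fmt.Println(err) // update out of sequence

	// Right: "inspect" first, then reuse the current version and spec.
	current := *svc // analogous to ServiceInspectWithRaw
	spec := current.Spec
	spec.Replicas = 3
	fmt.Println(svc.Update(current.Version, spec)) // <nil>
	fmt.Println(svc.Spec.Replicas)                 // 3
}
```

With the real SDK the shape is the same: call ServiceInspectWithRaw, take Meta.Version and the returned Spec, change Spec.Mode.Replicated.Replicas, and pass both to ServiceUpdate.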

Related

No such property: ToInputStream for class: Script4

I have a situation where I want to import my graph data into the database. I am running JanusGraph (latest version) with Cassandra (version 3) and Elasticsearch (version 6.6.0) using Docker. I was advised to use the gryo format, so I tried this command
graph.io(IoCore.gryo()).reader().create().readGraph(ToInputStream.from("my_graph.kryo"), graph);
but ended up with an error
No such property: ToInputStream for class: Script4
The documentation I am following is here. Please take a look and point me to the right procedure. Thanks in advance!
ToInputStream is not a function of Gremlin or JanusGraph. I believe that it is only a function of IBM Compose so unless you are running JanusGraph on that specific platform, this command will not work.
Versions of JanusGraph that utilize TinkerPop 3.4.x will support the io() step and this is the preferred manner in which to load gryo (as well as graphson and graphml) files.
Graph graph = ... // setup JanusGraph instance
GraphTraversalSource g = traversal().withGraph(graph); // might use withRemote() here instead depending on how you are connecting I suppose
g.io("graph.kryo").read().iterate()
Note that if you are connecting remotely - it seems you are, since your error suggests you're sending scripts to the Docker instance - then be sure that the "graph.kryo" file path is accessible to Docker. That's what's nice about ToInputStream from Compose: it allows you to access remote sources.

Errors Thrown When Inserting Data After Tag or Edge is Created

I'm using Nebula in Docker, but it throws an error the first time I connect. When I retry, everything is OK. Why is that? Must I retry every time?
This is likely caused by the heartbeat_interval_secs value, which controls how often data is fetched from the meta server. Take the following steps to resolve it:
If the meta service has registered, check the heartbeat_interval_secs value in the console with the following commands.
nebula> GET CONFIGS storage:heartbeat_interval_secs
nebula> GET CONFIGS graph:heartbeat_interval_secs
If the value is large, change it to 1s with the following commands.
nebula> UPDATE CONFIGS storage:heartbeat_interval_secs=1
nebula> UPDATE CONFIGS graph:heartbeat_interval_secs=1
Note that the changes take effect in the next heartbeat period.

Rancher imageUuid not Unique

While adding a service I forgot a part of the URL: it should be xxx/yyy/zzz but I entered xxx/zzz.
After stopping the stack and trying to correct the error, I get the following error:
Validation failed in API: imageUuid
I've tried removing all stacks and re-adding the service. However, it won't add again, and I've found no way to remove it from the server.
I think this is a bug. I'm using a private repository, so the address is:
private.dns.name:8080/project/tools/toola
This is not supported by Rancher; the following does work:
private.dns.name:8080/tools/toola

Why does application:start(disk_log) return an error?

I want to start disk_log with application:start/1, but it returns an error.
When I use disk_log:start/0, it returns ok.
(fish#yus-iMac.local)6> application:start(disk_log).
{error,{"no such file or directory","disk_log.app"}}
(fish#yus-iMac.local)7> disk_log:start().
ok
Why?
disk_log is not an application but a service that belongs to the kernel application. So you cannot start it using application:start(disk_log), and it does not have its own version (it is included in the kernel's).

Neo4j: Java API IndexHits<Node>.size() is 0

I'm trying to use the Java API for Neo4j but I seem to be stuck at IndexHits. If I query the DB with Cypher using
START n=node:types(type="Process") RETURN n;
I get all 2087 nodes of type "Process".
In my application I have the following lines
Index<Node> nodeIndex = db.index().forNodes("types");
IndexHits<Node> hits = nodeIndex.get("type", "Process");
System.out.println("Node index size: " + hits.size());
which leads my console to spit out a value of 0. Here, db is of course an instance of GraphDatabaseService.
I expected an object that included all 2087 nodes. What am I doing wrong?
The .size() question is just the prelude to my iterator
for(Node process : hits) { ... }
but that does not do much when hits.size() == 0. According to http://api.neo4j.org/1.9.2/org/neo4j/graphdb/index/IndexHits.html this should be possible, provided there is something in hits.
Thanks in advance for your help.
I figured it out. Man, I feel so embarrassed...
It so happens that I had set DB_PATH to my default data folder, whereas the actual store directory is the data folder plus graph.db. When I ran the code with the corrected DB_PATH, I got an error saying that a lock file was in place because the Neo4j server was running. After shutting it down, it worked perfectly.
So, if you happen to see the following error, just stop the server and run the code again:
Caused by: org.neo4j.kernel.StoreLockException: Could not create lock file
at org.neo4j.kernel.StoreLocker.checkLock(StoreLocker.java:74)
at org.neo4j.kernel.StoreLockerLifecycleAdapter.start(StoreLockerLifecycleAdapter.java:40)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:491)
I found on several forums that you cannot run the Neo4j server and use the Java API to query it at the same time.