I am using the remote API to create and start containers, but I am not sure how to pass in the command-line arguments I would normally use when creating a container on the local machine. Specifically, I am using this image, which requires a bunch of arguments I would normally pass when running 'docker run [arguments] [image]'. Any ideas?
For argument passing, you can do it like this:
curl -X POST localhost:2375/containers/create -H "Content-Type: application/json" -d '{"Cmd":["ping", "8.8.8.8"], "Image": "ubuntu"}'
Also see: http://blog.flux7.com/blogs/docker/docker-tutorial-series-part-8-docker-remote-api
This depends on which arguments you want to set. For things up to port bindings, here you can find how to do that. In general, you have to use the JSON objects passed as the body of your create and start requests.
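For example, a create request that sets a command, environment variables, a volume bind and a port mapping, followed by the start call, could look roughly like this. This is only a sketch: the container name, ports and paths are made up, and on newer API versions HostConfig belongs in the create body, while older versions expected it in the start request as described above.
curl -X POST "localhost:2375/containers/create?name=my-container" \
  -H "Content-Type: application/json" \
  -d '{
        "Image": "ubuntu",
        "Cmd": ["ping", "8.8.8.8"],
        "Env": ["FOO=bar"],
        "ExposedPorts": {"80/tcp": {}},
        "HostConfig": {
          "PortBindings": {"80/tcp": [{"HostPort": "8080"}]},
          "Binds": ["/host/data:/container/data"]
        }
      }'

curl -X POST localhost:2375/containers/my-container/start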
I have a webhook service which kicks off a buildWithParameters Jenkins job, and I want to be able to specify which buildservers are being used.
This is easy enough in the job configuration - I've added a Node parameter which lets me specify which nodes are valid, and when starting the job manually in the Jenkins web UI, I can select which nodes I want.
I'm able to kick off the job via curl using the buildWithParameters Jenkins feature:
curl -vvv 'https://webhook:examplepassword@jenkins.example.com/job/build-sideboard-plugin/buildWithParameters?token=exampletoken&GIT_REPO=example/repo&YUM_REPO=example&BUILDSERVER=sideboard.build.dev.xr'
However, I can't figure out how to specify multiple parameters. I expected that I'd simply be able to add a second &BUILDSERVER=xxx value and have that work, but running this:
curl -vvv 'https://webhook:examplepassword@jenkins.example.com/job/build-sideboard-plugin/buildWithParameters?token=exampletoken&GIT_REPO=example/repo&YUM_REPO=example&BUILDSERVER=sideboard.build.dev.xr&BUILDSERVER=sideboard.rocky8.build.dev.xr'
Returns a 500 error. I also tried providing a single value with a comma separating the two values, i.e.
curl -vvv 'https://webhook:examplepassword@jenkins.example.com/job/build-sideboard-plugin/buildWithParameters?token=exampletoken&GIT_REPO=example/repo&YUM_REPO=example&BUILDSERVER=sideboard.build.dev.xr,sideboard.rocky8.build.dev.xr'
but Jenkins interpreted that as a single Node value which didn't match any node since there's no node named sideboard.build.dev.xr,sideboard.rocky8.build.dev.xr. I got the same result when submitting the two values separated by a space.
Is there any way to get Jenkins to do this while still using the buildWithParameters functionality? I'd hate to have to redo the structure of our build triggering or switch to Jenkins Pipeline. Even making two different curl calls would be somewhat of a pain given how our webhooks are structured, so I'd love to be able to provide both parameters just like I can in the Jenkins web UI.
I don't think it is possible using the query parameters like you have tried, due to the fact that the plugin actually triggers two different builds.
What you can do is pass the parameters with the submit command as JSON data, which will simulate the trigger of the build with multiple servers selected.
The general syntax will be something like:
curl -u USER:PASSWORD --show-error \
--data 'json={"parameter":[{"name":"PARAMNAME","value":["node1","node2"]}]}' \
http://localhost:8080/job/remote/build?token=TOKEN
or in your case:
curl -u webhook:examplepassword --show-error \
--data 'json={"parameter":[{"name":"BUILDSERVER","value":["sideboard.build.dev.xr","sideboard.rocky8.build.dev.xr"]}]}' \
https://jenkins.example.com/job/build-sideboard-plugin/build?token=exampletoken
You can of course pass all other needed parameters alongside BUILDSERVER in the JSON data:
curl -u webhook:examplepassword --show-error \
--data 'json={"parameter":[{"name":"BUILDSERVER","value":["sideboard.build.dev.xr","sideboard.rocky8.build.dev.xr"]},{"name":"YUM_REPO","value":"example"},{"name":"GIT_REPO","value":"=example/repo"}]}' \
https://jenkins.example.com/job/build-sideboard-plugin/build?token=exampletoken
In addition, it is probably better to use --data-urlencode instead of --data for these curl commands, to avoid encoding issues in case the values of your parameters contain special characters.
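For example, the last command above rewritten with --data-urlencode (everything else unchanged) should look something like:
curl -u webhook:examplepassword --show-error \
--data-urlencode 'json={"parameter":[{"name":"BUILDSERVER","value":["sideboard.build.dev.xr","sideboard.rocky8.build.dev.xr"]},{"name":"YUM_REPO","value":"example"},{"name":"GIT_REPO","value":"example/repo"}]}' \
https://jenkins.example.com/job/build-sideboard-plugin/build?token=exampletoken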
More info on submitting jobs via the Remote Access API is available here.
Background
Firstly - a bit of background - I am trying to learn a bit more about Kafka and Kafka connect. In that vein I'm following along to an early release book 'Kafka Connect' by Mickael Maison and Kate Stanley.
Run Connectors
Very early on (Chapter 2 - components in a connect data pipeline) they give an example of 'How do you run connectors'. Note that the authors are not using Confluent. Here in the early stages, we are advised to create a file named sink-config.json and then create a topic called topic-to-export with the following line of code:
bin/kafka-topics.sh --bootstrap-server localhost:9092 \
--create --replication-factor 1 --partitions 1 --topic topic-to-export
We are then instructed to "use the Connect REST API to start the connector with the configuration you created"
$ curl -X PUT -H "Content-Type: application/json" \
  http://localhost:8083/connectors/file-sink/config --data "@sink-config.json"
The Error
However, when I run this command it brings up the following error:
{"error_code":500,"message":"Cannot deserialize value of type `java.lang.String` from Object value (token `JsonToken.START_OBJECT`)\n at [Source: (org.glassfish.jersey.message.internal.ReaderInterceptorExecutor$UnCloseableInputStream); line: 1, column: 36] (through reference chain: java.util.LinkedHashMap[\"config\"])"}
Trying to fix the error
Keeping in mind that I'm still trying to learn Kafka and Kafka Connect, I've done a pretty simple search, which brought me to a post on Stack Overflow that seemed to suggest this should have been a POST rather than a PUT. However, changing this to:
curl -d @sink-config.json -H "Content-Type: application/json" -X POST http://localhost:8083/connectors/file-sink/config
simply brings up another error:
{"error_code":405,"message":"HTTP 405 Method Not Allowed"}
I'm really not sure where to go from here as this 'seems' to be the way that you should be able to get a connector to run. For example, this intro to connectors by Baeldung also seems to specify this way of doing things.
Does anyone have any ideas what is going on? I'm not sure where to start...
First, thanks for taking a look at the early access version of our book.
You found a mistake in this example!
To start a connector, the recommended way is to use the PUT /connectors/file-sink/config endpoint; however, the example JSON we provided is not correct.
The JSON file should be something like:
{
"name": "file-sink",
"connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
"tasks.max": 1,
"topics": "topic-to-export",
"file": "/tmp/sink.out",
"value.converter": "org.apache.kafka.connect.storage.StringConverter"
}
The mistake comes because there's another endpoint that can be used to start connectors, POST /connectors, and the JSON we provided is for that endpoint.
We recommend you use PUT /connectors/file-sink/config as the same endpoint can also be used to reconfigure connectors. In addition, the same JSON file can also be used with the PUT /connector-plugins/{connector-type}/config/validate endpoint.
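For reference, with a sink-config.json containing the JSON above, the call from the question should then work along these lines (note the @ in front of the file name, so curl reads the file instead of sending a literal string):
curl -X PUT -H "Content-Type: application/json" \
  --data "@sink-config.json" \
  http://localhost:8083/connectors/file-sink/config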
Thanks again for spotting the mistake and reporting it, we'll fix the example in the coming weeks. We'll also reply to your emails about the other questions shortly.
How can I retrieve the list of configured acceptors in ActiveMQ Artemis via Jolokia/JMX (and curl)? I need to reload the acceptors after a TLS certificate update, but it looks like passing the acceptor name is mandatory. Unfortunately, I cannot just pass a static name because we use different acceptors, all using TLS, and I don't want to change the reloading code just because the acceptor config changed.
curl -s -f -u username:password -H 'Origin: localhost' 'http://127.0.0.1:8161/console/jolokia/read/org.apache.activemq.artemis:broker="borker-primary-0"'
shows the connectors, but not the acceptors.
This question is related to a change introduced in v2.18.0, see question on TLS certificate reload.
There is a getConnectors method on the main ActiveMQServerControl MBean, which is why Jolokia's read command returns those values. There is no corresponding getAcceptors method, but you can use Jolokia's list command to get effectively the same information. Use something like this:
curl -s -f -u username:password -H 'Origin: localhost' 'http://127.0.0.1:8161/console/jolokia/list/org.apache.activemq.artemis:broker="borker-primary-0"'
Then look through the results for component=acceptors and you'll be able to find all the acceptors with their respective names.
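If you want to script that lookup rather than eyeball the JSON, one rough sketch is below. It assumes python3 is available for pretty-printing and that the acceptor MBeans show up in the list output with component=acceptors in their object names; adjust the filtering to your actual output.
curl -s -f -u username:password -H 'Origin: localhost' \
  'http://127.0.0.1:8161/console/jolokia/list/org.apache.activemq.artemis:broker="borker-primary-0"' \
  | python3 -m json.tool \
  | grep 'component=acceptors'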
This is a bit of a hack, but a necessary one at this point given the lack of a management method to get the acceptors. I've opened ARTEMIS-3601 and sent a PR to deal with this use case, so in future versions this won't be necessary. You'll just be able to invoke getAcceptors or inspect them from the output of Jolokia's read command.
I am using Jenkins to remotely run an Ansible playbook via the Publish Over SSH command.
This command:
curl -k -v -X POST https://jenkins.myhost.com/job/Ansible_Deploy/build?token=<appToken> --user <myUser>:<userToken> --data-urlencode json='{"parameter":[{"name":"thisIsAList","value":["one","two","three"]}]}'
should trigger a post-build action to remotely execute the following command over SSH:
ansible-playbook /home/<myUser>/test/practice.yml --extra-vars "thisIsAList=$thisIsAList"
thisIsAList is a string parameter under Job Notifications, and the job is parameterized. I have successfully executed similar commands, but this one fails, presumably because the value is a list. I have tried both "String Parameter" and "Multi-line String Parameter" to no avail.
Here's the stack trace:
org.kohsuke.stapler.WrongTypeException: Got type array but no lister class found for type class java.lang.String
at org.kohsuke.stapler.RequestImpl$TypePair.convertJSON(RequestImpl.java:723)
at org.kohsuke.stapler.RequestImpl.bindJSON(RequestImpl.java:478)
at org.kohsuke.stapler.RequestImpl.instantiate(RequestImpl.java:777)
Caused: java.lang.IllegalArgumentException: Failed to convert the value parameter of the constructor public hudson.model.StringParameterValue(java.lang.String,java.lang.String)
at org.kohsuke.stapler.RequestImpl.instantiate(RequestImpl.java:779)
at org.kohsuke.stapler.RequestImpl.access$200(RequestImpl.java:83)
at org.kohsuke.stapler.RequestImpl$TypePair.convertJSON(RequestImpl.java:678)
Caused: java.lang.IllegalArgumentException: Failed to instantiate class hudson.model.StringParameterValue from {"name":"thisIsAList","value":["one","two","three"]}
at org.kohsuke.stapler.RequestImpl$TypePair.convertJSON(RequestImpl.java:680)
at org.kohsuke.stapler.RequestImpl.bindJSON(RequestImpl.java:478)
at org.kohsuke.stapler.RequestImpl.bindJSON(RequestImpl.java:474)
at hudson.model.StringParameterDefinition.createValue(StringParameterDefinition.java:88)
at hudson.model.ParametersDefinitionProperty._doBuild(ParametersDefinitionProperty.java:165)
Note: This may be a duplicate of How to pass an array to a jenkins parameterized job via remote access api? but it hasn't gotten a valid response.
Since this level of nesting isn't detailed anywhere in the Jenkins or Ansible documentation, I'll shed some light on the topic now that I've solved my issue.
The command:
ansible-playbook /home/<myUsr>/test/practice.yml --extra-vars "thisIsAList=$thisIsAList"
should have declared thisIsAList to be a dictionary object, i.e.:
ansible-playbook /home/<myUsr>/test/practice.yml --extra-vars "{thisIsAList=$thisIsAList}"
Furthermore, the data in the cURL command should've been formatted differently like so:
json='{"parameter":[{"name":"thisIsAList","value":"[one,two,three]"}]}'
Note: the double-quotes are around the whole list, rather than the individual elements.
Finally, with further nested items (such as dict inside a list) you have to escape the double-quotes like so:
{"parameter":[{"name":"thisIsADictNestedInAList","value":"[{\"name\":\"numbers\",\"value\":[1s, 2s, 3s]}]"}]}
It seems that at this level of nesting it is no longer required to double-quote the lists, probably because the quotes one level up already cause it to be interpreted correctly.
This is a bit of a guess, based on a similar problem I have seen with a choice parameter. Any documentation I have found seems to be wrong about how to handle these. It shouldn't be a list; try passing it as a string with newlines separating the items.
curl -k -v -X POST https://jenkins.myhost.com/job/Ansible_Deploy/build?token=<appToken> --user <myUser>:<userToken> --data-urlencode json='{"parameter":[{"name":"thisIsAList","value":"one\ntwo\nthree"}]}'
Let me know if this works. I'm interested to find out.
Edit: (based on comments)
Would this work:
curl -k -v -X POST https://jenkins.myhost.com/job/Ansible_Deploy/build?token=<appToken> --user <myUser>:<userToken> --data-urlencode json='{"parameter":[{"name":"thisIsAList","value":"'{\"thisIsAList\": [\"one\",\"two\",\"three\"]}'"}]}'
The nested quotes get a bit ugly. If you are using pipeline or can massage the data in a shell script first, it would probably be cleaner.
I have tried:
curl -v --http1.0 --data "mac=00:00:00" -F "userfile=@/tmp/02-02-02-02-02-22" http://url_address/getfile.php
but it fails with the following message:
Warning: You can only select one HTTP request!
How can I send a mix of data and a file with curl? Is it possible or not?
Thank you
Read up on how -F actually works! You can add any number of data parts and file parts in a multipart formpost that -F makes. -d, however, makes a "standard" clean post, and you cannot mix -d with -F.
You need to first figure out which kind of post you want, then you pick either -d or -F depending on your answer.
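If a multipart form post is what your getfile.php expects, a sketch using only -F for both the plain field and the file would look like this (assuming the field names your script reads are mac and userfile, as in the question):
curl -v --http1.0 \
  -F "mac=00:00:00" \
  -F "userfile=@/tmp/02-02-02-02-02-22" \
  http://url_address/getfile.php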