How to extract a list of Docker images inside GCP Artifact Registry

I want to list all the repositories inside GCP Artifact Registry in Go.
Current code (using https://pkg.go.dev/cloud.google.com/go/artifactregistry/apiv1beta2):
c, err := artifactregistry.NewClient(ctx, option.WithCredentialsFile("<service account json>"))
if err != nil {
    // no error here
}
defer c.Close()

req := &artifactregistrypb.ListRepositoriesRequest{
    Parent: "<project-id>",
}
it := c.ListRepositories(ctx, req)
for {
    resp, err := it.Next()
    if err == iterator.Done { // iterator is google.golang.org/api/iterator
        break
    }
    if err != nil {
        fmt.Println("err ==>", err)
        break
    }
    fmt.Println("resp", resp)
}
The error prints "Invalid field value in the request.", or sometimes I get "Request contains an invalid argument".
What am I doing wrong here? And what does "Parent" mean in ListRepositoriesRequest?
On further digging, I found that the value passed in Parent goes into the "x-goog-request-params" header. What is the correct format for this?

Sometimes the libraries/APIs are well documented, sometimes not...
Here is the REST API that you can test in the API Explorer (right-hand side bar). After some tests, the parent must have this format:
projects/<PROJECT_ID>/locations/<REGION>
Try that to solve your issue.
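For reference, a minimal sketch of the corrected request, reusing the client c and ctx from the question; the project ID and region below are placeholders, and iterator refers to google.golang.org/api/iterator:

req := &artifactregistrypb.ListRepositoriesRequest{
    // Parent is the location whose repositories you want to list,
    // in the format projects/<PROJECT_ID>/locations/<REGION>.
    Parent: "projects/my-project/locations/us-central1",
}
it := c.ListRepositories(ctx, req)
for {
    repo, err := it.Next()
    if err == iterator.Done {
        break
    }
    if err != nil {
        return err
    }
    fmt.Println("repository:", repo.GetName())
}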

Related

Docker (Moby) golang image build logs are base64 encoded

I'm looking for help with extracting the image build logs from a dockerd (buildkit/moby) image build request sent by a Golang-based client using the Docker client libraries.
I can request the image build fine and receive the log stream of JSON messages, then decode them as jsonmessage.JSONMessage instances. But the actual log lines from the builder appear to be base64 encoded in an aux field of each JSON message.
I can decode the base64 easily enough, but it seems to include odd terminal control characters and possibly mis-encoded data, which makes me wonder if it's actually a base64 encoding of some kind of struct I'm supposed to unpack.
What confuses me is that I can't find anything in the docker-ce or moby code that seems to base64-decode an 'aux' payload when processing logs when displaying build progress for docker buildx build.
As far as I can tell, the buildx code doesn't do anything special to the aux payload: https://github.com/docker/docker-ce/blob/523cf7e71252013fbb6a590be67a54b4a88c1dae/components/cli/cli/command/image/build_buildkit.go#L325
For example, trimmed-down build code like:
image := Image{Name: "test"}
contextreader, err := archive.TarWithOptions(buildConf.Build.Context, &archive.TarOptions{})
if err != nil {
    return err
}
imageBuildResponse, err := b.client.ImageBuild(
    ctx,
    contextreader,
    types.ImageBuildOptions{
        Version:    types.BuilderBuildKit,
        Context:    contextreader,
        Dockerfile: dockerfile,
    })
if err != nil {
    return err
}
defer imageBuildResponse.Body.Close()

buf := bytes.NewBuffer(nil)
imageID := ""
writeAux := func(msg jsonmessage.JSONMessage) {
    if msg.ID == "moby.image.id" {
        var result types.BuildResult
        if err := json.Unmarshal(*msg.Aux, &result); err != nil {
            panic("don't do this in your real code")
        }
        imageID = result.ID
        return
    }
}

err = jsonmessage.DisplayJSONMessagesStream(imageBuildResponse.Body, buf, os.Stderr.Fd(), false /* not terminal */, writeAux)
if err != nil {
    if jerr, ok := err.(*jsonmessage.JSONError); ok {
        // If no error code is set, default to 1
        if jerr.Code == 0 {
            jerr.Code = 1
        }
        return fmt.Errorf("error while building image: %s", jerr.Message)
    }
}
will write json payloads to stderr like
{"id":"moby.buildkit.trace","aux":"Cn0KR3NoYTI1NjozZThhMzMxYmRkZGFjNWZkYmNjOGVhMDFmYWFhYmM3MjA0MDkwMmYwNjdmYzRhOGY0NDJmMmIzYWVlN2RkNGIyGiRbaW50ZXJuYWxdIGxvYWQgcmVtb3RlIGJ1aWxkIGNvbnRleHQqDAiYw8KaBhCykpCqAg=="}
{"id":"moby.buildkit.trace","aux":"CokBCkdzaGEyNTY6M2U4YTMzMWJkZGRhYzVmZGJjYzhlYTAxZmFhYWJjNzIwNDA5MDJmMDY3ZmM0YThmNDQyZjJiM2FlZTdkZDRiMhokW2ludGVybmFsXSBsb2FkIHJlbW90ZSBidWlsZCBjb250ZXh0KgwImMPCmgYQspKQqgIyCgiZw8KaBhD08F0="}
The base64 strings here don't decode as valid UTF-8, and they don't make sense as ISO-8859-1 either. E.g. with a UTF-8 console encoding:
$ base64 -d <<<'Cn0KR3NoYTI1NjozZThhMzMxYmRkZGFjNWZkYmNjOGVhMDFmYWFhYmM3MjA0MDkwMmYwNjdmYzRhOGY0NDJmMmIzYWVlN2RkNGIyGiRbaW50ZXJuYWxdIGxvYWQgcmVtb3RlIGJ1aWxkIGNvbnRleHQqDAiYw8KaBhCykpCqAg=='
}
Gsha256:3e8a331bdddac5fdbcc8ea01faaabc72040902f067fc4a8f442f2b3aee7dd4b2�$[internal] load remote build context*
������
It looks like it's probably a struct, but for the life of me I can't find what decodes and processes it.
So of course I find the answer while writing up the SO question...
The writeAux function in build_buildkit.go calls the write method of a tracer instance, and that does the real work. I must've been blind.
The messages are serialized instances of StatusResponse from the github.com/moby/buildkit/api/services/control package. They are unmarshalled from base64-decoded byte sequences and inspected. If you want logs and to skip everything else, just look for instances with non-empty Logs member arrays, e.g. something like this within the above writeAux function:
} else if msg.ID == "moby.buildkit.trace" {
    // Process the message like
    // https://github.com/docker/docker-ce/blob/523cf7e71252013fbb6a590be67a54b4a88c1dae/components/cli/cli/command/image/build_buildkit.go#L386
    // the 'tracer.write' method in build_buildkit.go
    var resp controlapi.StatusResponse
    var dt []byte
    // ignoring all messages that are not understood
    if err := json.Unmarshal(*msg.Aux, &dt); err != nil {
        return
    }
    if err := (&resp).Unmarshal(dt); err != nil {
        return
    }
    for _, v := range resp.Vertexes {
        fmt.Printf("layer: %+v\n", v)
    }
    for _, v := range resp.Statuses {
        fmt.Printf("status: %+v\n", v)
    }
    for _, v := range resp.Logs {
        fmt.Printf("log: %s\n", v.Msg)
    }
}
The json.Unmarshal and controlapi.StatusResponse.Unmarshal do the base64 decoding and unpacking for you.
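As a minimal standalone sketch of that same decoding, using one of the raw aux payloads shown above and the github.com/moby/buildkit/api/services/control import already named in this answer (you need the buildkit module in your go.mod for it to build):

package main

import (
    "encoding/json"
    "fmt"

    controlapi "github.com/moby/buildkit/api/services/control"
)

func main() {
    // The "aux" field as it appears in the raw JSON message: a JSON string
    // whose contents are base64-encoded protobuf bytes.
    raw := []byte(`"Cn0KR3NoYTI1NjozZThhMzMxYmRkZGFjNWZkYmNjOGVhMDFmYWFhYmM3MjA0MDkwMmYwNjdmYzRhOGY0NDJmMmIzYWVlN2RkNGIyGiRbaW50ZXJuYWxdIGxvYWQgcmVtb3RlIGJ1aWxkIGNvbnRleHQqDAiYw8KaBhCykpCqAg=="`)

    // json.Unmarshal into a []byte performs the base64 decoding.
    var dt []byte
    if err := json.Unmarshal(raw, &dt); err != nil {
        panic(err)
    }

    // The decoded bytes are a protobuf-serialized StatusResponse.
    var resp controlapi.StatusResponse
    if err := (&resp).Unmarshal(dt); err != nil {
        panic(err)
    }
    for _, v := range resp.Vertexes {
        fmt.Printf("vertex: %s\n", v.Name)
    }
}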

http.Post() from a gRPC server to an http server returns EOF error on a docker-compose setup

I have a gRPC server (server) written in Go that a Python gRPC client (client) talks to. The server occasionally sends HTTP POST requests to a Go-based HTTP server (sigsvc). All of these run as Docker containers spun up through docker-compose, sharing the same Docker network.
This is the section of code on server that creates and sends the http request:
b := new(bytes.Buffer)
txbytes, err := json.Marshal(tx)
if err != nil {
    log.WithError(err).Error("failed to marshal transaction")
    return nil, err
}
b.Write(txbytes)
resp, err := http.Post(sigsvc.signerURL, "application/json; charset=utf-8", b)
if err != nil {
    log.WithError(err).Errorf("error signing transaction with signer %s", sigsvc.signerURL)
    return nil, err
}
defer resp.Body.Close()
var signedTx types.Transaction
err = json.NewDecoder(resp.Body).Decode(&signedTx)
if err != nil {
    log.WithError(err).Error("couldn't decode signed transaction")
    return nil, err
}
sigsvc.signerURL maps to something like http://signer:6666/sign which is the endpoint on the http signer service that handles the request.
signer refers to the service name listed on a docker-compose.yml specification.
This is what the handler looks like on sigsvc:
func (sv *SignerSv) handleSignTx() http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        log.Info("request received to sign transaction")
        dump, err := httputil.DumpRequest(r, true)
        if err != nil {
            http.Error(w, fmt.Sprint(err), http.StatusInternalServerError)
        }
        log.Debugf("%q", dump)
        if r.Body == nil {
            log.Error("request body missing")
            http.Error(w, "Please send a request body", 400)
            return
        }
        log.Debugf("request body: %v", r.Body)
        var tx types.Transaction
        err = json.NewDecoder(r.Body).Decode(&tx)
        if err != nil {
            log.WithError(err).Error("failed to unmarshal transaction")
            http.Error(w, err.Error(), 400)
            return
        }
        log.WithFields(log.Fields{
            "txhash":   tx.Hash().Hex(),
            "nonce":    tx.Nonce(),
            "to":       tx.To().Hex(),
            "data":     tx.Data(),
            "gasLimit": tx.Gas(),
            "gasPrice": tx.GasPrice(),
            "value":    tx.Value(),
        }).Debug("Decoded transaction from request body")
Both the request and request body are dumped successfully by the debug logs. However, apparently the line decoding the request body to the transaction type is never executed, since no error or decoded transaction logs are logged.
On server, I keep getting the following error:
error="Post http://signer:6666/sign: EOF"
This is how the request is logged on sigsvc:
msg="\"POST /sign HTTP/1.1\\r\\nHost: signer:6666\\r\\nConnection: close\\r\\nAccept-Encoding: gzip\\r\\nConnection: close\\r\\nContent-Length: 10708\\r\\nUser-Agent: Go-http-client/1.1\\r\\n\\r\\n{\\\"nonce\\\":\\\"0x0\\\",\\\"gasPrice\\\":\\\"0x2540be400\\\",\\\"gas\\\":\\\"0x15c285\\\",\\\"to\\\":null,\\\"value\\\":\\\"0x0\\\",\\\"input\\\":\\\"0x6080604055",\\\"v\\\":\\\"0x0\\\",\\\"r\\\":\\\"0x0\\\",\\\"s\\\":\\\"0x0\\\",\\\"hash\\\":\\\"0xab55920fb3d490fc55ccd76a29dfb380f4f8a9e5d0bda4155a3b114fca26da0a\\\"}\"
I have tried reproducing this error on similar but simplified Docker setups, without success.
I'm trying to understand the following:
Is there anything wrong with this code that is being exposed by a particular Docker setup?
Or do I need to look at some Docker setup specifics to debug the instances?
The problem was in the way the http handler code was logging the to field in this logrus call.
log.WithFields(log.Fields{
    "txhash":   tx.Hash().Hex(),
    "nonce":    tx.Nonce(),
    "to":       tx.To().Hex(),
    "data":     tx.Data(),
    "gasLimit": tx.Gas(),
    "gasPrice": tx.GasPrice(),
    "value":    tx.Value(),
}).Debug("Decoded transaction from request body")
Under specific circumstances the tx.To() call returns nil (the dumped request above shows "to":null, as for a contract-creation transaction), so calling tx.To().Hex() dereferences a nil pointer. On the face of it, one would expect the log.WithFields() call to error out or panic visibly, but the net/http server recovers panics in handlers, so the handler silently closes the connection and the client side just gets an EOF response.
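One way to avoid the nil dereference, sticking to the fields already used in the handler above (the placeholder string is just illustrative):

// Guard against transactions with no recipient, where tx.To() is nil.
to := "nil (contract creation)"
if tx.To() != nil {
    to = tx.To().Hex()
}
log.WithFields(log.Fields{
    "txhash":   tx.Hash().Hex(),
    "nonce":    tx.Nonce(),
    "to":       to,
    "data":     tx.Data(),
    "gasLimit": tx.Gas(),
    "gasPrice": tx.GasPrice(),
    "value":    tx.Value(),
}).Debug("Decoded transaction from request body")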

How to wrap exec.Command inside an io.Writer

I'm trying to compress a JPEG image in Go using mozjpeg. Since it doesn't have an official Go binding, I think I'll just invoke its CLI to do the compression.
I try to model the usage after compress/gzip:
c := jpeg.NewCompresser(destFile)
_, err := io.Copy(c, srcFile)
Now the question is, how do I wrap the CLI inside Compresser so it can support this usage?
I tried something like this:
type Compresser struct {
    cmd *exec.Cmd
}

func NewCompressor(w io.Writer) *Compresser {
    cmd := exec.Command("jpegtran", "-copy", "none")
    cmd.Stdout = w
    c := &Compresser{cmd}
    return c
}

func (c *Compresser) Write(p []byte) (n int, err error) {
    if c.cmd.Process == nil {
        err = c.cmd.Start()
        if err != nil {
            return
        }
    }
    // How do I write p into c.cmd.Stdin?
}
But couldn't finish it.
Also, a second question is, when do I shut down the command? How to shut down the command?
You should take a look at Cmd.StdinPipe. There is an example in the documentation which suits your case:
package main

import (
    "fmt"
    "io"
    "log"
    "os/exec"
)

func main() {
    cmd := exec.Command("cat")
    stdin, err := cmd.StdinPipe()
    if err != nil {
        log.Fatal(err)
    }

    go func() {
        defer stdin.Close()
        io.WriteString(stdin, "values written to stdin are passed to cmd's standard input")
    }()

    out, err := cmd.CombinedOutput()
    if err != nil {
        log.Fatal(err)
    }

    fmt.Printf("%s\n", out)
}
In this case, CombinedOutput() runs your command and returns once it has exited; for cat, that happens when its stdin is closed and there are no more bytes left to read.
As per Kiril's answer, use cmd.StdinPipe to pass on the data your Write method receives.
However, in terms of closing, I'd be tempted to implement io.Closer. This would make *Compresser automatically implement the io.WriteCloser interface.
I would use Close() as the notification that there is no more data to be sent and that the command should be terminated. Any non-zero exit code returned from the command that indicates failure could be caught and returned as an error.
I would be wary of using CombinedOutput() inside Write() in case you have a slow input stream. The utility could finish processing the input stream and be waiting for more data. This would be incorrectly detected as command completion and would result in an invalid output.
Remember, the Write method can be called an indeterminate number of times during IO operations.
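Putting both answers together, a minimal sketch of what a Compresser as an io.WriteCloser could look like (the jpegtran invocation is taken from the question; error handling is kept deliberately simple):

package jpeg

import (
    "io"
    "os/exec"
)

// Compresser pipes everything written to it into the jpegtran process,
// which writes its output to w.
type Compresser struct {
    cmd   *exec.Cmd
    stdin io.WriteCloser
}

func NewCompressor(w io.Writer) (*Compresser, error) {
    cmd := exec.Command("jpegtran", "-copy", "none")
    cmd.Stdout = w
    stdin, err := cmd.StdinPipe()
    if err != nil {
        return nil, err
    }
    if err := cmd.Start(); err != nil {
        return nil, err
    }
    return &Compresser{cmd: cmd, stdin: stdin}, nil
}

// Write forwards data to the command's standard input.
func (c *Compresser) Write(p []byte) (int, error) {
    return c.stdin.Write(p)
}

// Close signals end of input and waits for the command to exit.
// A non-zero exit code surfaces as the error returned by Wait.
func (c *Compresser) Close() error {
    if err := c.stdin.Close(); err != nil {
        return err
    }
    return c.cmd.Wait()
}

With that, io.Copy(c, srcFile) from the question drives Write, and the caller calls Close() once the source is exhausted.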

Golang - Docker API - parse result of ImagePull

I'm developing a Go script that uses the Docker API for the purposes of my project. After I log in to my repository, I pull the Docker image I want, but the problem is that the ImagePull function returns an instance of io.ReadCloser, which I'm only able to pass to the system output via:
io.Copy(os.Stdout, pullResp)
It's cool that I can see the response, but I can't find a decent way to parse it and implement logic depending on it, which will do some things if a new version of the image has been downloaded, and other things if the image was up to date.
I'll be glad if you share your experience, if you have ever faced this problem.
You can import github.com/docker/docker/pkg/jsonmessage and use both JSONMessage and JSONProgress to decode the stream but it's easier to call
DisplayJSONMessagesToStream: it both parses the stream and displays the messages as text. Here's how you can display the messages using stderr:
reader, err := cli.ImagePull(ctx, myImageRef, types.ImagePullOptions{})
if err != nil {
    return err
}
defer reader.Close()

termFd, isTerm := term.GetFdInfo(os.Stderr)
jsonmessage.DisplayJSONMessagesStream(reader, os.Stderr, termFd, isTerm, nil)
The nice thing is that it adapts to the output: it updates the lines if this is a TTY (the way docker pull does), but it doesn't if the output is redirected to a file.
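For completeness, the snippet above assumes imports roughly along these lines; exact paths can vary between Docker/moby versions (newer ones use github.com/moby/term), so treat them as a guide:

import (
    "os"

    "github.com/docker/docker/api/types"
    "github.com/docker/docker/pkg/jsonmessage"
    "github.com/docker/docker/pkg/term"
)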
@radoslav-stoyanov: before using my example, run
# docker rmi busybox
then run this code:
package main

import (
    "encoding/json"
    "fmt"
    "io"
    "strings"

    "github.com/docker/distribution/context"
    docker "github.com/docker/engine-api/client"
    "github.com/docker/engine-api/types"
)

func main() {
    // DOCKER
    cli, err := docker.NewClient("unix:///var/run/docker.sock", "v1.28", nil, map[string]string{"User-Agent": "engine-api-cli-1.0"})
    if err != nil {
        panic(err)
    }
    imageName := "busybox:latest"

    events, err := cli.ImagePull(context.Background(), imageName, types.ImagePullOptions{})
    if err != nil {
        panic(err)
    }

    d := json.NewDecoder(events)

    type Event struct {
        Status         string `json:"status"`
        Error          string `json:"error"`
        Progress       string `json:"progress"`
        ProgressDetail struct {
            Current int `json:"current"`
            Total   int `json:"total"`
        } `json:"progressDetail"`
    }

    var event *Event
    for {
        if err := d.Decode(&event); err != nil {
            if err == io.EOF {
                break
            }
            panic(err)
        }
        fmt.Printf("EVENT: %+v\n", event)
    }

    // Latest event for new image
    // EVENT: {Status:Status: Downloaded newer image for busybox:latest Error: Progress:[==================================================>] 699.2kB/699.2kB ProgressDetail:{Current:699243 Total:699243}}
    // Latest event for up-to-date image
    // EVENT: {Status:Status: Image is up to date for busybox:latest Error: Progress: ProgressDetail:{Current:0 Total:0}}
    if event != nil {
        if strings.Contains(event.Status, fmt.Sprintf("Downloaded newer image for %s", imageName)) {
            // new
            fmt.Println("new")
        }
        if strings.Contains(event.Status, fmt.Sprintf("Image is up to date for %s", imageName)) {
            // up-to-date
            fmt.Println("up-to-date")
        }
    }
}
You can see the API formats here, so you can create your own structures (like my Event) to read them: https://docs.docker.com/engine/api/v1.27/#operation/ImageCreate
I hope it helps you solve your problem, thanks.
I have used a similar approach for my purpose (not a moby client). The idea is generally the same for reading a stream response. Give it a try and adapt it to your needs.
Reading a stream response of any response type:
reader := bufio.NewReader(pullResp)
defer pullResp.Close() // pullResp is io.ReadCloser

var resp bytes.Buffer
for {
    line, err := reader.ReadBytes('\n')
    if err != nil {
        // it could be EOF or read error
        // handle it
        break
    }
    resp.Write(line)
    resp.WriteByte('\n')
}
// print it
fmt.Println(resp.String())
However, your sample response in the comment seems to be a valid JSON structure, and json.Decoder is the best way to read a JSON stream. This is just an idea:
type ImagePullResponse struct {
    ID             string `json:"id"`
    Status         string `json:"status"`
    ProgressDetail struct {
        Current int64 `json:"current"`
        Total   int64 `json:"total"`
    } `json:"progressDetail"`
    Progress string `json:"progress"`
}
And do
d := json.NewDecoder(pullResp)
for {
    var pullResult ImagePullResponse
    if err := d.Decode(&pullResult); err != nil {
        // handle the error
        break
    }
    fmt.Println(pullResult)
}

stub.GetHistoryKeys() reporting "GetHistoryKeys() function undefined" when running go build on my chaincode

I am new to Hyperledger. I am using Docker to run Hyperledger. I pulled hyperledger/fabric-peer:latest from Docker Hub and am able to run stub.CreateTable(), stub.GetRows(), stub.InsertRows() and some other functions in my chaincode. But when I try to run stub.GetHistoryKeys() or stub.GetCompositeKeys() etc. in my chaincode, it reports an error:
stub.GetHistoryForKey undefined (type shim.ChaincodeStubInterface has no field
or method GetHistoryForKey)
I found that there are no such functions in my interface.go file. I googled a lot but found nothing. Can anyone tell me the correct hyperledger/fabric-peer image to pull so that the above functions can run in chaincode?
Please download the latest version of the Fabric image (hyperledger/fabric-peer x86_64-1.1.0). It can be downloaded with the script mentioned under "Install binaries" on the official Hyperledger website: https://hyperledger-fabric.readthedocs.io/en/latest/install.html. Once you have it, write some Go code: simply add one JSON record on a key, then change some field of the JSON and add it again to the same key. Once you have done that, run the code below to get the history:
func (s *SmartContract) getAllTransactionForid(APIstub shim.ChaincodeStubInterface, args []string) sc.Response {
    fmt.Println("getAllTransactionForNumber called")
    id := args[0]

    resultsIterator, err := APIstub.GetHistoryForKey(id)
    if err != nil {
        return shim.Error(err.Error())
    }
    defer resultsIterator.Close()

    // buffer is a JSON array containing historic values for the key
    var buffer bytes.Buffer
    buffer.WriteString("[")
    bArrayMemberAlreadyWritten := false
    for resultsIterator.HasNext() {
        response, err := resultsIterator.Next()
        if err != nil {
            return shim.Error(err.Error())
        }
        // Add a comma before array members, suppress it for the first array member
        if bArrayMemberAlreadyWritten == true {
            buffer.WriteString(",")
        }
        buffer.WriteString("{\"TxId\":")
        buffer.WriteString("\"")
        buffer.WriteString(response.TxId)
        buffer.WriteString("\"")

        buffer.WriteString(", \"Value\":")
        // if it was a delete operation on the given key, then we need to set the
        // corresponding value to null. Else, we write the response.Value
        // as-is (as the Value itself is JSON)
        if response.IsDelete {
            buffer.WriteString("null")
        } else {
            buffer.WriteString(string(response.Value))
        }

        buffer.WriteString(", \"Timestamp\":")
        buffer.WriteString("\"")
        buffer.WriteString(time.Unix(response.Timestamp.Seconds, int64(response.Timestamp.Nanos)).String())
        buffer.WriteString("\"")

        buffer.WriteString(", \"IsDelete\":")
        buffer.WriteString("\"")
        buffer.WriteString(strconv.FormatBool(response.IsDelete))
        buffer.WriteString("\"")

        buffer.WriteString("}")
        bArrayMemberAlreadyWritten = true
    }
    if !bArrayMemberAlreadyWritten {
        buffer.WriteString("\"No record found\"") // quoted so the array stays valid JSON
    }
    buffer.WriteString("]")

    fmt.Printf("- getAllTransactionForNumber returning:\n%s\n", buffer.String())
    return shim.Success(buffer.Bytes())
}
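For reference, the setup step described above (writing the same key in two separate transactions so there is history to read) could look roughly like this; the Asset struct and the key "id-1" are illustrative, not from the question:

// Illustrative only: invoke this in two separate transactions with different
// values so GetHistoryForKey("id-1") returns more than one entry.
type Asset struct {
    Owner string `json:"owner"`
    Value int    `json:"value"`
}

func (s *SmartContract) addAsset(APIstub shim.ChaincodeStubInterface, args []string) sc.Response {
    value, err := strconv.Atoi(args[0])
    if err != nil {
        return shim.Error(err.Error())
    }
    payload, err := json.Marshal(Asset{Owner: "alice", Value: value})
    if err != nil {
        return shim.Error(err.Error())
    }
    if err := APIstub.PutState("id-1", payload); err != nil {
        return shim.Error(err.Error())
    }
    return shim.Success(nil)
}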
If you are still in doubt, please reply and I will give you my whole source code to make it work. But I hope this makes your problem go away :-)
At last I was able to figure out how to get the Hyperledger images to support my chaincode.

Resources