Memory consumption with many goroutines

I've put up an HTTP server written in Go and it's getting over a thousand visitors a day. I have an accumulating goroutine problem (they are never freed). Over the course of a day I seem to accumulate well over a hundred thousand new goroutines from the HTTP server. I used pprof to check where the problem comes from, and this is what I got:
Link: memory consumption (pprof SVG)
Heap profile. Below are two of my goroutine stacks:
500 # 0x410255 0x5a9255 0x5a9e25 0x5aa615 0x5990cf 0x5ada95 0x59d23f 0x4367b1
# 0x5a9255 net._C2func_getaddrinfo+0x55 /usr/local/go/src/net/:26
# 0x5a9e25 net.cgoLookupIPCNAME+0x1c5 /usr/local/go/src/net/cgo_unix.go:96
# 0x5aa615 net.cgoLookupIP+0x65 /usr/local/go/src/net/cgo_unix.go:148
# 0x5990cf net.lookupIP+0x5f /usr/local/go/src/net/lookup_unix.go:64
# 0x5ada95 net.func·026+0x55 /usr/local/go/src/net/lookup.go:79
# 0x59d23f net.(*singleflight).doCall+0x2f /usr/local/go/src/net/singleflight.go:91
157871 # 0x423985 0x4239f8 0x411464 0x410c93 0x5a9d68 0x5aa615 0x5990cf 0x5ada95 0x59d23f 0x4367b1
# 0x5a9d68 net.cgoLookupIPCNAME+0x108 /usr/local/go/src/net/cgo_unix.go:85
# 0x5aa615 net.cgoLookupIP+0x65 /usr/local/go/src/net/cgo_unix.go:148
# 0x5990cf net.lookupIP+0x5f /usr/local/go/src/net/lookup_unix.go:64
# 0x5ada95 net.func·026+0x55 /usr/local/go/src/net/lookup.go:79
# 0x59d23f net.(*singleflight).doCall+0x2f /usr/local/go/src/net/singleflight.go:91
Here we can see that singleflight.go holds most of the goroutines; it is part of Go's standard net package. My code blocks in this function:
func getXmlVast(url string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", errors.New("request error A(" + err.Error() + ")")
	}
	defer resp.Body.Close()

	// read xml http response
	xmlData, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		return "", errors.New("request error B(" + err.Error() + ")")
	}
	return string(xmlData), nil
}
Why does Go never free these goroutines, and what does singleflight.go do?

I updated my Go version from 1.4 to 1.5 and that solved the problem.
I did some research beforehand to find where the problem comes from, and I noticed that a lot of people have the same problem and no one knew why. I think the problem was in the net/http stack because, as I said in my question, the function that holds most of the goroutines is singleflight, and it ends up being called by http.Get(url).
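The answer above never says what singleflight actually does, so as background (drawn from the exported golang.org/x/sync/singleflight package, not from the original post): it deduplicates concurrent calls that share a key, which is how the net resolver lets many goroutines wait on a single in-flight DNS lookup instead of each launching their own. A minimal sketch of the pattern, with a stand-in for the lookup:
package main

import (
	"fmt"
	"sync"

	"golang.org/x/sync/singleflight"
)

func main() {
	var g singleflight.Group
	var wg sync.WaitGroup

	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// All five goroutines ask for the same key, but the expensive
			// function runs only once; the rest wait for and share its result.
			v, _, shared := g.Do("example.com", func() (interface{}, error) {
				return "93.184.216.34", nil // stand-in for a DNS lookup
			})
			fmt.Println(v, "shared:", shared)
		}()
	}
	wg.Wait()
}
If the underlying call never returns, as the hung cgo getaddrinfo calls above apparently did, every goroutine waiting on that key stays blocked along with it.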

Related

Docker (Moby) golang image build logs are base64 encoded

I'm looking for help with extracting the image build logs from a dockerd (buildkit/moby) image build request sent by a Golang based client using the docker client libraries.
I can request the image build fine and receive the log stream of json messages then decode them as Jsonmessage instances. But the actual log lines from the builder appear to be base64 encoded in an aux field of each json message.
I can decode the base64 easily enough, but they seem to include odd terminal control characters and possibly mis-encoded data, which makes me wonder if they're actually a base64 encoding of some kind of struct I'm supposed to unpack.
What confuses me is that I can't find anything in the docker-ce or moby code that seems to base64-decode an 'aux' payload when processing logs or displaying build progress for docker buildx build.
As far as I can tell, the buildx code doesn't do anything special to the aux payload: https://github.com/docker/docker-ce/blob/523cf7e71252013fbb6a590be67a54b4a88c1dae/components/cli/cli/command/image/build_buildkit.go#L325
For example, trimmed-down build code like:
image := Image{Name: "test"}
contextreader, err := archive.TarWithOptions(buildConf.Build.Context, &archive.TarOptions{})
if err != nil {
	return err
}
imageBuildResponse, err := b.client.ImageBuild(
	ctx,
	contextreader,
	types.ImageBuildOptions{
		Version:    types.BuilderBuildKit,
		Context:    contextreader,
		Dockerfile: dockerfile,
	})
if err != nil {
	return err
}
defer imageBuildResponse.Body.Close()

buf := bytes.NewBuffer(nil)
imageID := ""
writeAux := func(msg jsonmessage.JSONMessage) {
	if msg.ID == "moby.image.id" {
		var result types.BuildResult
		if err := json.Unmarshal(*msg.Aux, &result); err != nil {
			panic("don't do this in your real code")
		}
		imageID = result.ID
		return
	}
}
err = jsonmessage.DisplayJSONMessagesStream(imageBuildResponse.Body, buf, os.Stderr.Fd(), false /* not terminal */, writeAux)
if err != nil {
	if jerr, ok := err.(*jsonmessage.JSONError); ok {
		// If no error code is set, default to 1
		if jerr.Code == 0 {
			jerr.Code = 1
		}
		return fmt.Errorf("error while building image: %s", jerr.Message)
	}
}
will write json payloads to stderr like
{"id":"moby.buildkit.trace","aux":"Cn0KR3NoYTI1NjozZThhMzMxYmRkZGFjNWZkYmNjOGVhMDFmYWFhYmM3MjA0MDkwMmYwNjdmYzRhOGY0NDJmMmIzYWVlN2RkNGIyGiRbaW50ZXJuYWxdIGxvYWQgcmVtb3RlIGJ1aWxkIGNvbnRleHQqDAiYw8KaBhCykpCqAg=="}
{"id":"moby.buildkit.trace","aux":"CokBCkdzaGEyNTY6M2U4YTMzMWJkZGRhYzVmZGJjYzhlYTAxZmFhYWJjNzIwNDA5MDJmMDY3ZmM0YThmNDQyZjJiM2FlZTdkZDRiMhokW2ludGVybmFsXSBsb2FkIHJlbW90ZSBidWlsZCBjb250ZXh0KgwImMPCmgYQspKQqgIyCgiZw8KaBhD08F0="}
The base64 strings here don't decode as valid utf-8, and they don't make sense as ISO-8859-1 either. E.g. with a utf-8 console encoding:
$ base64 -d <<<'Cn0KR3NoYTI1NjozZThhMzMxYmRkZGFjNWZkYmNjOGVhMDFmYWFhYmM3MjA0MDkwMmYwNjdmYzRhOGY0NDJmMmIzYWVlN2RkNGIyGiRbaW50ZXJuYWxdIGxvYWQgcmVtb3RlIGJ1aWxkIGNvbnRleHQqDAiYw8KaBhCykpCqAg=='
}
Gsha256:3e8a331bdddac5fdbcc8ea01faaabc72040902f067fc4a8f442f2b3aee7dd4b2�$[internal] load remote build context*
������
It looks like it's probably a struct, but for the life of me I can't find what decodes and processes it.
So of course I find the answer while writing up the SO question...
The writeAux function in build_buildkit.go calls the write method of a tracer instance, and that does the real work. I must've been blind.
The messages are serialized instances of StatusResponse from the github.com/moby/buildkit/api/services/control package. They are unmarshalled from base64-decoded byte sequences and inspected. If you want logs and to skip everything else, just look for instances with non-empty Logs member arrays, e.g. something like this within the above writeAux function:
} else if msg.ID == "moby.buildkit.trace" {
	// Process the message like the 'tracer.write' method in build_buildkit.go:
	// https://github.com/docker/docker-ce/blob/523cf7e71252013fbb6a590be67a54b4a88c1dae/components/cli/cli/command/image/build_buildkit.go#L386
	var resp controlapi.StatusResponse
	var dt []byte
	// ignoring all messages that are not understood
	if err := json.Unmarshal(*msg.Aux, &dt); err != nil {
		return
	}
	if err := (&resp).Unmarshal(dt); err != nil {
		return
	}
	for _, v := range resp.Vertexes {
		fmt.Printf("layer: %+v", v)
	}
	for _, v := range resp.Statuses {
		fmt.Printf("status: %+v", v)
	}
	for _, v := range resp.Logs {
		fmt.Printf("log: %s", v.Msg)
	}
}
The json.Unmarshal and controlapi.StatusResponse.Unmarshal do the base64 decoding and unpacking for you.

Flush data added to websocket

I'm writing a speed test, but I'm having trouble on the client side for uploading.
I have the following setup, which basically continues to write data into the socket while a condition is true, and then closes the socket:
var ws = await createWebSocket(sb.serverAddress, sb.authToken);
while (condition) {
  var bytes = generateRandomBytes(_BUFFER_SIZE_BYTES);
  ws.add(bytes);
  print('added');
  var megabits = (bytes.length * 8) / 1000000;
  channel.sink.add(megabits);
}
await ws.close();
My problem is that I can't work out how to wait for the bytes to be accepted by the underlying buffer. Even if I set _BUFFER_SIZE_BYTES to a huge size, it still loops at breakneck speed printing added, where I really want to wait until all the bytes are accepted by the send buffer (having been accepted by the server) before adding a new list of bytes.
With an HTTP POST request you can do await postReq.flush();, but I don't see any such method for WebSockets.
OK, so I think I have a reasonable solution to this problem.
The client side has to wait for a response from the server before sending more bytes:
var bytes = generateRandomBytes(_CHUNK_SIZE_BYTES);
ws.listen((data) async {
  ws.add(bytes);
  var megabits = (bytes.length * 8) / 1000000;
  channel.sink.add(megabits);
});
Server (Go) sends a message to the client signalling that it can send a chunk, and then reads the entire response from the client, before signalling to the client that it is ready to accept another one:
for start := time.Now(); time.Since(start) < time.Second*maxDuration; {
	err := conn.WriteMessage(websocket.TextMessage, []byte("next"))
	if err != nil {
		break
	}
	// will get an error if try writing to closed socket
	_, bytes, err := conn.ReadMessage()
	if err != nil {
		fmt.Println(err)
		break
	}
	fmt.Println(len(bytes))
}
I think this solution is OK. I've set the chunk size to 10 MB, which seems to work fine. Let me know if anyone has a better idea.

How to extract list of docker images inside GCP artifact registry

I want to list all the repositories inside GCP artifact registry in golang.
Current code (https://pkg.go.dev/cloud.google.com/go/artifactregistry/apiv1beta2):
c, err := artifactregistry.NewClient(ctx, option.WithCredentialsFile("<service account json>"))
if err != nil {
	// no error here
}
defer c.Close()

req := &artifactregistrypb.ListRepositoriesRequest{
	Parent: "<project-id>",
}
it := c.ListRepositories(ctx, req)
for {
	resp, err := it.Next()
	if err == nil {
		fmt.Println("resp", resp)
	} else {
		fmt.Println("err ==>", err)
	}
}
The error prints: Invalid field value in the request. Or sometimes I get: Request contains an invalid argument.
What am I doing wrong here? And what does Parent mean (in ListRepositoriesRequest)?
On further digging, I found that the value passed in Parent ends up in the "x-goog-request-params" header. What should the correct format be?
Sometimes the libraries/APIs are well documented, sometimes not...
Here is the REST API, which you can test in the API Explorer (right-hand side bar). After some tests, the parent must have this format:
projects/<PROJECT_ID>/locations/<REGION>
Try that to solve your issue.
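For illustration, here is a hedged sketch of the question's loop with the parent in that format, and with the iterator drained until iterator.Done instead of looping forever; the artifactregistrypb import path is an assumption and may differ between versions of the apiv1beta2 client:
package main

import (
	"context"
	"fmt"
	"log"

	artifactregistry "cloud.google.com/go/artifactregistry/apiv1beta2"
	"google.golang.org/api/iterator"
	"google.golang.org/api/option"
	artifactregistrypb "google.golang.org/genproto/googleapis/devtools/artifactregistry/v1beta2"
)

func main() {
	ctx := context.Background()
	c, err := artifactregistry.NewClient(ctx, option.WithCredentialsFile("<service account json>"))
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	req := &artifactregistrypb.ListRepositoriesRequest{
		// Parent is a full resource name, not a bare project ID.
		Parent: "projects/<PROJECT_ID>/locations/<REGION>",
	}
	it := c.ListRepositories(ctx, req)
	for {
		repo, err := it.Next()
		if err == iterator.Done {
			break // end of the list, not a real error
		}
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("repository:", repo.Name)
	}
}
The same projects/<PROJECT_ID>/locations/<REGION> value is what the client copies into the x-goog-request-params header mentioned in the question; it is built from the Parent field of the request.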

How to wrap exec.Command inside an io.Writer

I'm trying to compress a JPEG image in Go using mozjpeg. Since it doesn't have an official Go binding, I think I'll just invoke its CLI to do the compression.
I try to model the usage after compress/gzip:
c := jpeg.NewCompresser(destFile)
_, err := io.Copy(c, srcFile)
Now the question is, how do I wrap the CLI inside Compresser so it can support this usage?
I tried something like this:
type Compresser struct {
	cmd *exec.Cmd
}

func NewCompressor(w io.Writer) *Compresser {
	cmd := exec.Command("jpegtran", "-copy", "none")
	cmd.Stdout = w
	c := &Compresser{cmd}
	return c
}

func (c *Compresser) Write(p []byte) (n int, err error) {
	if c.cmd.Process == nil {
		err = c.cmd.Start()
		if err != nil {
			return
		}
	}
	// How do I write p into c.cmd.Stdin?
}
But couldn't finish it.
Also, a second question: when do I shut down the command, and how?
You should take a look at the Cmd.StdinPipe. There is an example in the documentation, which suits your case:
package main

import (
	"fmt"
	"io"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("cat")
	stdin, err := cmd.StdinPipe()
	if err != nil {
		log.Fatal(err)
	}

	go func() {
		defer stdin.Close()
		io.WriteString(stdin, "values written to stdin are passed to cmd's standard input")
	}()

	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("%s\n", out)
}
In this case, CombinedOutput() executes your command, and the execution is finished when there are no more bytes to read from out.
As per Kiril's answer, use cmd.StdinPipe to pass on the data that Write receives.
However, in terms of closing, I'd be tempted to implement io.Closer. This would make *Compresser automatically implement the io.WriteCloser interface.
I would use Close() as the notification that there is no more data to be sent and that the command should be terminated. Any non-zero exit code returned from the command that indicates failure could be caught and returned as an error.
I would be wary of using CombinedOutput() inside Write() in case you have a slow input stream. The utility could finish processing the input stream and be waiting for more data. This would be incorrectly detected as command completion and would result in an invalid output.
Remember, the Write method can be called an indeterminate number of times during IO operations.
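A minimal sketch of what that io.WriteCloser shape could look like, assuming the jpegtran invocation from the question; this is one possible arrangement under those assumptions, not the only way to do it:
package main

import (
	"io"
	"log"
	"os"
	"os/exec"
)

// Compresser streams everything written to it through jpegtran and
// sends the command's stdout to the writer supplied at construction.
type Compresser struct {
	cmd   *exec.Cmd
	stdin io.WriteCloser
}

func NewCompresser(w io.Writer) (*Compresser, error) {
	cmd := exec.Command("jpegtran", "-copy", "none")
	cmd.Stdout = w
	stdin, err := cmd.StdinPipe()
	if err != nil {
		return nil, err
	}
	if err := cmd.Start(); err != nil {
		return nil, err
	}
	return &Compresser{cmd: cmd, stdin: stdin}, nil
}

// Write forwards each chunk to the command's standard input.
func (c *Compresser) Write(p []byte) (int, error) {
	return c.stdin.Write(p)
}

// Close signals end of input and waits for the command to exit;
// a non-zero exit status comes back as the error from Wait.
func (c *Compresser) Close() error {
	if err := c.stdin.Close(); err != nil {
		return err
	}
	return c.cmd.Wait()
}

func main() {
	src, err := os.Open("in.jpg")
	if err != nil {
		log.Fatal(err)
	}
	defer src.Close()
	dst, err := os.Create("out.jpg")
	if err != nil {
		log.Fatal(err)
	}
	defer dst.Close()

	c, err := NewCompresser(dst)
	if err != nil {
		log.Fatal(err)
	}
	if _, err := io.Copy(c, src); err != nil {
		log.Fatal(err)
	}
	if err := c.Close(); err != nil {
		log.Fatal(err)
	}
}
Starting the command in the constructor (rather than lazily in Write) keeps Write trivial and avoids the partial-start state the question ran into.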

Parsing a golang time object from an incomplete string

I have the following date string: 2017-09-04T04:00:00Z
I need to parse this string into a golang time in order to have uniform data across my application. Here is the code so far:
parsedTime := "2017-09-04T04:00:00Z"
test, err := time.Parse(time.RFC3339, parsedTime)
check(err)
fmt.Println(test)
I get the following error when I try to run the program:
": extra text: 0:00 +0000 UTC parsing time "2017-09-04T04:00:00Z
How can I either add the extra text that it is looking for or get the parser to stop looking after the Z?
I have also tried the following:
parsedTime := "2017-09-04T04:00:00Z"
test, err := time.Parse("2006-01-02T03:04:05Z", parsedTime)
check(err)
fmt.Println(test)
Which returns the following error:
": extra text: 017-09-04T04:00:00Z
Both formats you used work with the current version of Go: https://play.golang.org/p/Typyq3Okrd
var formats = []string{
	time.RFC3339,
	"2006-01-02T03:04:05Z",
}

func main() {
	parsedTime := "2017-09-04T04:00:00Z"
	for _, format := range formats {
		if test, err := time.Parse(format, parsedTime); err != nil {
			fmt.Printf("ERROR: format %q resulted in error: %v\n", format, err)
		} else {
			fmt.Printf("format %q yielded %s\n", format, test)
		}
	}
}
Can you provide a working example that demonstrates your problem? You can use the go playground for shareable snippets.

Resources