Parsing a yaml file with Golang

I'm trying to parse a yaml file with Golang. I defined the following types:
type DockerNetwork struct {
    MyNetwork struct {
        driver string
    } `yaml:"my_network"`
}

// DockerNetworks represents the docker networks type
type DockerNetworks struct {
    networks []DockerNetwork
}
So I have my unit test in place:
func TestDockerNetwork(t *testing.T) {
    dn := DockerNetworks{}
    var data = `
networks:
  my_network:
    driver: bridge
`
    err := yaml.Unmarshal([]byte(data), &dn)
    if err != nil {
        log.Fatalf("error: %v", err)
        t.Error("Could not Unmarshal the data")
    }
    log.Println(fmt.Sprintf("--- t:\n%v\n\n", dn))
}
I expected it to work; however, I'm getting empty output:
2016/12/14 13:38:12 --- t:
{{}}
what am I doing wrong?

My yaml file is a docker-compose.yml file, which is in the format described above (part of it). I agree with Marius that I have unexported fields, which was an error. The solution I used was a map:
m := make(map[interface{}]interface{})
err = yaml.Unmarshal([]byte(data), &m)
if err != nil {
    log.Fatalf("error: %v", err)
}
fmt.Printf("--- m:\n%v\n\n", m)

AFAICT there are two things: data structures with unexported fields, and test data which doesn't match your data structures.
Something like this would give you a go:
type DockerNetwork struct {
    MyNetwork struct {
        Driver string `yaml:"driver"`
    } `yaml:"my_network"`
}

type DockerNetworks struct {
    Networks []DockerNetwork `yaml:"networks"`
}
And test data:
networks:
- my_network:
    driver: bridge
Gives an output:
2016/12/14 23:47:23 --- t:
{[{{bridge}}]}
PASS
And it's hard to tell what "working" means in your case.
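Putting the corrected types and test data together, here is a minimal runnable sketch (assuming gopkg.in/yaml.v2, as in the question):
package main

import (
    "fmt"
    "log"

    yaml "gopkg.in/yaml.v2"
)

type DockerNetwork struct {
    MyNetwork struct {
        Driver string `yaml:"driver"`
    } `yaml:"my_network"`
}

type DockerNetworks struct {
    Networks []DockerNetwork `yaml:"networks"`
}

func main() {
    data := `
networks:
- my_network:
    driver: bridge
`
    var dn DockerNetworks
    if err := yaml.Unmarshal([]byte(data), &dn); err != nil {
        log.Fatalf("error: %v", err)
    }
    fmt.Printf("%v\n", dn) // {[{{bridge}}]}
}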

Related

Docker (Moby) golang image build logs are base64 encoded

I'm looking for help with extracting the image build logs from a dockerd (buildkit/moby) image build request sent by a Golang based client using the docker client libraries.
I can request the image build fine and receive the log stream of JSON messages, then decode them as jsonmessage.JSONMessage instances. But the actual log lines from the builder appear to be base64-encoded in an aux field of each JSON message.
I can decode the base64 easily enough, but they seem to include odd terminal control characters and possibly mis-encoded data, which makes me wonder if they're actually a base64 encoding of some kind of struct I'm supposed to unpack.
What confuses me is that I can't find anything in the docker-ce or moby code that seems to base64-decode an 'aux' payload when processing logs when displaying build progress for docker buildx build.
As far as I can tell, the buildx code doesn't do anything special to the aux payload: https://github.com/docker/docker-ce/blob/523cf7e71252013fbb6a590be67a54b4a88c1dae/components/cli/cli/command/image/build_buildkit.go#L325
For example, trimmed-down build code like:
image := Image{Name: "test"}
contextreader, err := archive.TarWithOptions(buildConf.Build.Context, &archive.TarOptions{})
if err != nil {
    return err
}
imageBuildResponse, err := b.client.ImageBuild(
    ctx,
    contextreader,
    types.ImageBuildOptions{
        Version:    types.BuilderBuildKit,
        Context:    contextreader,
        Dockerfile: dockerfile,
    })
if err != nil {
    return err
}
defer imageBuildResponse.Body.Close()

buf := bytes.NewBuffer(nil)
imageID := ""
writeAux := func(msg jsonmessage.JSONMessage) {
    if msg.ID == "moby.image.id" {
        var result types.BuildResult
        if err := json.Unmarshal(*msg.Aux, &result); err != nil {
            panic("don't do this in your real code")
        }
        imageID = result.ID
        return
    }
}
err = jsonmessage.DisplayJSONMessagesStream(imageBuildResponse.Body, buf, os.Stderr.Fd(), false /* not terminal */, writeAux)
if err != nil {
    if jerr, ok := err.(*jsonmessage.JSONError); ok {
        // If no error code is set, default to 1
        if jerr.Code == 0 {
            jerr.Code = 1
        }
        return fmt.Errorf("error while building image: %s", jerr.Message)
    }
}
will write json payloads to stderr like
{"id":"moby.buildkit.trace","aux":"Cn0KR3NoYTI1NjozZThhMzMxYmRkZGFjNWZkYmNjOGVhMDFmYWFhYmM3MjA0MDkwMmYwNjdmYzRhOGY0NDJmMmIzYWVlN2RkNGIyGiRbaW50ZXJuYWxdIGxvYWQgcmVtb3RlIGJ1aWxkIGNvbnRleHQqDAiYw8KaBhCykpCqAg=="}
{"id":"moby.buildkit.trace","aux":"CokBCkdzaGEyNTY6M2U4YTMzMWJkZGRhYzVmZGJjYzhlYTAxZmFhYWJjNzIwNDA5MDJmMDY3ZmM0YThmNDQyZjJiM2FlZTdkZDRiMhokW2ludGVybmFsXSBsb2FkIHJlbW90ZSBidWlsZCBjb250ZXh0KgwImMPCmgYQspKQqgIyCgiZw8KaBhD08F0="}
The base64 strings here don't decode as valid UTF-8, and they don't make sense as ISO-8859-1 either. E.g. with a UTF-8 console encoding:
$ base64 -d <<<'Cn0KR3NoYTI1NjozZThhMzMxYmRkZGFjNWZkYmNjOGVhMDFmYWFhYmM3MjA0MDkwMmYwNjdmYzRhOGY0NDJmMmIzYWVlN2RkNGIyGiRbaW50ZXJuYWxdIGxvYWQgcmVtb3RlIGJ1aWxkIGNvbnRleHQqDAiYw8KaBhCykpCqAg=='
}
Gsha256:3e8a331bdddac5fdbcc8ea01faaabc72040902f067fc4a8f442f2b3aee7dd4b2�$[internal] load remote build context*
������
It looks like it's probably a struct, but for the life of me I can't find what decodes and processes it.
So of course I find the answer while writing up the SO question...
The writeAux function in build_buildkit.go calls the write method of a tracer instance, and that does the real work. I must've been blind.
The messages are serialized instances of StatusResponse from the github.com/moby/buildkit/api/services/control package. They are unmarshalled from the base64-decoded byte sequences and then inspected. If you only want the logs and want to skip everything else, just look for instances with non-empty Logs member arrays, e.g. something like this within the above writeAux function:
} else if msg.ID == "moby.buildkit.trace" {
    // Process the message like
    // https://github.com/docker/docker-ce/blob/523cf7e71252013fbb6a590be67a54b4a88c1dae/components/cli/cli/command/image/build_buildkit.go#L386
    // (the 'tracer.write' method in build_buildkit.go)
    var resp controlapi.StatusResponse
    var dt []byte
    // ignoring all messages that are not understood
    if err := json.Unmarshal(*msg.Aux, &dt); err != nil {
        return
    }
    if err := (&resp).Unmarshal(dt); err != nil {
        return
    }
    for _, v := range resp.Vertexes {
        fmt.Printf("layer: %+v", v)
    }
    for _, v := range resp.Statuses {
        fmt.Printf("status: %+v", v)
    }
    for _, v := range resp.Logs {
        fmt.Printf("log: %s", v.Msg)
    }
}
The json.Unmarshal and controlapi.StatusResponse.Unmarshal do the base64 decoding and unpacking for you.
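Pulled out of the handler, the two-step decode fits in a small helper. A hypothetical sketch (the decodeAux name is mine, and it assumes the protobuf-generated Unmarshal method on controlapi.StatusResponse):
import (
    "encoding/json"

    "github.com/docker/docker/pkg/jsonmessage"
    controlapi "github.com/moby/buildkit/api/services/control"
)

// decodeAux turns a moby.buildkit.trace aux payload into a StatusResponse.
func decodeAux(msg jsonmessage.JSONMessage) (*controlapi.StatusResponse, error) {
    // the aux field is a JSON string, so unmarshalling into []byte base64-decodes it
    var dt []byte
    if err := json.Unmarshal(*msg.Aux, &dt); err != nil {
        return nil, err
    }
    var resp controlapi.StatusResponse
    if err := resp.Unmarshal(dt); err != nil {
        return nil, err
    }
    return &resp, nil
}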

After a certain load of GEOADD & BRPOP, Redis/Docker responds with errors

I am using go-redis to connect to a Redis server running on Docker Desktop, while running my Go app directly on my Mac.
This is my client setup:
package redis

import (
    "fmt"
    "os"

    "github.com/go-redis/redis/v8"
)

var redisClient *RedisClient

type RedisClient struct {
    *redis.Client
}

func GetRedisClient() *RedisClient {
    if redisClient != nil {
        return redisClient
    }
    host := os.Getenv("REDIS_HOST")
    port := os.Getenv("REDIS_PORT")
    password := os.Getenv("REDIS_PASS")
    client := redis.NewClient(&redis.Options{
        Addr:     fmt.Sprintf("%s:%s", host, port),
        Password: password,
        DB:       0,
    })
    redisClient = &RedisClient{
        Client: client,
    }
    return redisClient
}
Docker:
version: "3.8"
services:
  redis:
    container_name: redis
    image: redis:6.2
    ports:
      - "6379:6379"
    ulimits:
      nofile:
        soft: 65536
        hard: 65536
The app exposes websocket connections to drivers, which report their current location every second; the app then saves each location in Redis using GEOADD.
The app also exposes another set of websocket connections to the same drivers for general notifications, consumed via BRPOP.
After about 70 drivers' websocket connections are open, I get errors from the extra drivers trying to connect. The errors come from the function that saves the location to Redis:
dial tcp [::1]:6379: socket: too many open files
and sometimes dial tcp: lookup localhost: no such host
func (r *RedisClient) SetPoint(ctx context.Context, item *Identifier, loc *Location) error {
    geoLocation := &redis.GeoLocation{Name: item.Id, Latitude: loc.Lat, Longitude: loc.Lng}
    if err := r.GeoAdd(ctx, item.key(), geoLocation).Err(); err != nil {
        fmt.Println("error adding geo", err)
        return errors.New("failed to set point")
    }
    return nil
}
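For context, a hypothetical call site from the location websocket handler; the identifier and coordinates are placeholders, and Identifier/Location are assumed to be plain structs with the fields SetPoint uses:
item := &Identifier{Id: "driver-42"}
loc := &Location{Lat: 33.51, Lng: 36.29}
if err := redis.GetRedisClient().SetPoint(ctx, item, loc); err != nil {
    log.Println("failed to store location:", err)
}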
For general notifications, the timeout on the BRPOP pulling is zero, meaning it blocks indefinitely:
type DriverData struct {
    Status   OrderStatusType `json:"status,omitempty"`
    DriverId uint            `json:"driver_id,omitempty"`
    UserId   uint            `json:"user_id,omitempty"`
}

func (config *Config) DriverOrderStatus(c *gin.Context) {
    driverID := utils.ToUint(auth.GetToken(c).Subject)
    ctx := c.Request.Context()
    // order := models.GetOrder(config.Db)
    // var _ = order.GetActiveOrderForUser(driverID)
    wsconn, err := websocket.Accept(c.Writer, c.Request, &websocket.AcceptOptions{InsecureSkipVerify: true})
    if err != nil {
        return
    }
    // if order.ID != 0 {
    //     var _ = wsjson.Write(ctx, wsconn, &UserData{Order: order, Status: order.Status, Driver: order.Driver})
    // } else {
    //     var _ = wsjson.Write(ctx, wsconn, &UserData{ResetOrder: true})
    // }
    defer wsconn.Close(websocket.StatusInternalError, "")
    closeRead := wsconn.CloseRead(ctx)
    driverDataCh := make(chan *DriverData, 1000)
    go func() {
    loop:
        for {
            select {
            case <-closeRead.Done():
                break loop
            default:
                if status, err := config.Redis.DriverPullStatus(ctx, driverID); err == nil {
                    driverDataCh <- &DriverData{Status: status.Status, DriverId: status.DriverID, UserId: status.UserID}
                }
            }
        }
        fmt.Println("redis pulling data is over")
    }()
loop:
    for {
        select {
        case <-closeRead.Done():
            break loop
        case driverData := <-driverDataCh:
            if err := wsjson.Write(ctx, wsconn, driverData); err != nil {
                break loop
            }
        }
    }
    fmt.Println("sending updates to user is over")
}
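DriverPullStatus isn't shown in the question. A hypothetical sketch of what such a method could look like, assuming a per-driver list key and a JSON payload matching the fields used above (the key name and the DriverStatus type are mine):
type DriverStatus struct {
    Status   OrderStatusType `json:"status"`
    DriverID uint            `json:"driver_id"`
    UserID   uint            `json:"user_id"`
}

func (r *RedisClient) DriverPullStatus(ctx context.Context, driverID uint) (*DriverStatus, error) {
    // BRPOP with a zero timeout blocks until an element is pushed
    res, err := r.BRPop(ctx, 0, fmt.Sprintf("driver:%d:status", driverID)).Result()
    if err != nil {
        return nil, err
    }
    // res[0] is the key name, res[1] is the popped value
    var status DriverStatus
    if err := json.Unmarshal([]byte(res[1]), &status); err != nil {
        return nil, err
    }
    return &status, nil
}
Note that a blocking BRPOP holds a pooled connection (and therefore a socket) for its entire duration, so every connected driver pins at least one descriptor, which is consistent with the "too many open files" errors above.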
This is Redis server info:
127.0.0.1:6379> info
# Server
redis_version:6.2.6
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:a0adc3471b8cfa72
redis_mode:standalone
os:Linux 5.10.47-linuxkit x86_64
arch_bits:64
multiplexing_api:epoll
atomicvar_api:atomic-builtin
gcc_version:10.3.1
process_id:1
process_supervised:no
run_id:d92005e2ccb89ea8e3be57e3bb1b79e0e323c2a7
tcp_port:6379
server_time_usec:1654937072463352
uptime_in_seconds:325287
uptime_in_days:3
hz:10
configured_hz:10
lru_clock:10769904
executable:/data/redis-server
config_file:
io_threads_active:0
# Clients
connected_clients:104
cluster_connections:0
maxclients:10000
client_recent_max_input_buffer:48
client_recent_max_output_buffer:0
blocked_clients:81
tracking_clients:0
clients_in_timeout_table:0
# Memory
used_memory:3081168
used_memory_human:2.94M
used_memory_rss:5791744
used_memory_rss_human:5.52M
used_memory_peak:5895528
used_memory_peak_human:5.62M
used_memory_peak_perc:52.26%
used_memory_overhead:2944804
used_memory_startup:809880
used_memory_dataset:136364
used_memory_dataset_perc:6.00%
allocator_allocated:3166992
allocator_active:3862528
allocator_resident:6742016
total_system_memory:4125036544
total_system_memory_human:3.84G
used_memory_lua:37888
used_memory_lua_human:37.00K
used_memory_scripts:0
used_memory_scripts_human:0B
number_of_cached_scripts:0
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
allocator_frag_ratio:1.22
allocator_frag_bytes:695536
allocator_rss_ratio:1.75
allocator_rss_bytes:2879488
rss_overhead_ratio:0.86
rss_overhead_bytes:-950272
mem_fragmentation_ratio:1.88
mem_fragmentation_bytes:2712392
mem_not_counted_for_evict:0
mem_replication_backlog:0
mem_clients_slaves:0
mem_clients_normal:2134588
mem_aof_buffer:0
mem_allocator:jemalloc-5.1.0
active_defrag_running:0
lazyfree_pending_objects:0
lazyfreed_objects:0
# Persistence
loading:0
current_cow_size:0
current_cow_size_age:0
current_fork_perc:0.00
current_save_keys_processed:0
current_save_keys_total:0
rdb_changes_since_last_save:3636
rdb_bgsave_in_progress:0
rdb_last_save_time:1654936992
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:1
rdb_current_bgsave_time_sec:-1
rdb_last_cow_size:450560
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
aof_last_cow_size:0
module_fork_in_progress:0
module_fork_last_cow_size:0
# Stats
total_connections_received:1271
total_commands_processed:296750
instantaneous_ops_per_sec:45
total_net_input_bytes:27751095
total_net_output_bytes:1254190
instantaneous_input_kbps:4.16
instantaneous_output_kbps:12.46
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
expired_stale_perc:0.00
expired_time_cap_reached_count:0
expire_cycle_cpu_milliseconds:18136
evicted_keys:0
keyspace_hits:10
keyspace_misses:3567
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:6453
total_forks:41
migrate_cached_sockets:0
slave_expires_tracked_keys:0
active_defrag_hits:0
active_defrag_misses:0
active_defrag_key_hits:0
active_defrag_key_misses:0
tracking_total_keys:0
tracking_total_items:0
tracking_total_prefixes:0
unexpected_error_replies:0
total_error_replies:6
dump_payload_sanitizations:0
total_reads_processed:297924
total_writes_processed:295658
io_threaded_reads_processed:0
io_threaded_writes_processed:0
# Replication
role:master
connected_slaves:0
master_failover_state:no-failover
master_replid:fc426cf72670e6ad09221bcb9c3423a1e1fab47e
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
# CPU
used_cpu_sys:428.939381
used_cpu_user:123.850311
used_cpu_sys_children:0.755309
used_cpu_user_children:0.065924
used_cpu_sys_main_thread:428.485425
used_cpu_user_main_thread:123.679341
# Modules
# Errorstats
errorstat_ERR:count=6
# Cluster
cluster_enabled:0
# Keyspace
db0:keys=6,expires=0,avg_ttl=0
After a lot of searching, it turns out this is caused by the limit on open file descriptors. Every websocket connection opens a file descriptor, and every machine has a limit. On Linux/Unix, this is defined under ulimit.
More on that in this article.
To update the ulimit on macOS, refer to this post.
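As a quick way to see the limit the Go process is actually running with, here is a minimal sketch using the standard library:
package main

import (
    "fmt"
    "syscall"
)

func main() {
    var rl syscall.Rlimit
    if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
        panic(err)
    }
    // every open socket (websocket or Redis connection) consumes one descriptor
    fmt.Printf("open-file limit: soft=%d hard=%d\n", rl.Cur, rl.Max)
}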

Parse prometheus metrics data to add label and re-parse to prometheus metrics format

In short, I'm writing a program to collect some kubelet metrics remotely.
But these metrics don't contain information about the node_name, so they will be duplicated when Prometheus scrapes them.
So I want to parse the metrics, add the node name label to them, then re-serialize them into the Prometheus metrics format, so I can host them as an endpoint for Prometheus to scrape.
But I ran into problems parsing the metrics.
package main

import (
    "flag"
    "fmt"
    "log"
    "os"

    dto "github.com/prometheus/client_model/go"
    "github.com/prometheus/common/expfmt"
)

func fatal(err error) {
    if err != nil {
        log.Fatalln(err)
    }
}

func parseMF(path string) (map[string]*dto.MetricFamily, error) {
    reader, err := os.Open(path)
    if err != nil {
        return nil, err
    }
    var parser expfmt.TextParser
    mf, err := parser.TextToMetricFamilies(reader)
    if err != nil {
        return nil, err
    }
    return mf, nil
}

func main() {
    f := flag.String("f", "", "set filepath")
    flag.Parse()
    mf, err := parseMF(*f)
    fatal(err)
    for k, v := range mf {
        fmt.Println("KEY: ", k)
        fmt.Println("VAL: ", v)
    }
}
With a simple metric like this:
# HELP net_conntrack_dialer_conn_attempted_total
# TYPE net_conntrack_dialer_conn_attempted_total untyped
net_conntrack_dialer_conn_attempted_total{dialer_name="federate",instance="localhost:9090",job="prometheus"} 1 1608520832877
the result:
name:"net_conntrack_dialer_conn_attempted_total" type:UNTYPED metric:<label:<name:"dialer_name" value:"federate" > label:<name:"instance" value:"localhost:9090" > label:<name:"job" value:"prometheus" > untyped:<value:1 > timestamp_ms:1608520832877 >
I just want to extract the name, labels, value, and timestamp, but I can't work out how to access them, unlike in Python:
from prometheus_client.parser import text_string_to_metric_families

metrics = """
# HELP net_conntrack_dialer_conn_attempted_total
# TYPE net_conntrack_dialer_conn_attempted_total untyped
net_conntrack_dialer_conn_attempted_total{dialer_name="federate",instance="localhost:9090",job="prometheus"} 1 1608520832877
"""

for family in text_string_to_metric_families(metrics):
    for sample in family.samples:
        print("{0}\n{1}\n{2}\n{3}".format(*sample))
Python result:
net_conntrack_dialer_conn_attempted_total
{'dialer_name': 'federate', 'instance': 'localhost:9090', 'job': 'prometheus'}
1.0
1608520832.877
So back to the main question: how can I extract those specific values and modify them?
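For what it's worth, the dto types are protobuf-generated, so every field has a getter. A minimal sketch of walking the parsed families with those getters, inside the main loop above (GetUntyped matches the sample's untyped metric; other metric types use GetCounter, GetGauge, and so on):
for name, family := range mf {
    for _, m := range family.GetMetric() {
        labels := map[string]string{}
        for _, lp := range m.GetLabel() {
            labels[lp.GetName()] = lp.GetValue()
        }
        fmt.Println(name, labels, m.GetUntyped().GetValue(), m.GetTimestampMs())
    }
}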

Unable to pull from GCR registry using docker engine-api and long lived JSON file

The advanced authentication methods documentation for Google Cloud Container Registry describes a method for logging in to the registry using a JSON key file with the Docker CLI, and this works fine:
$ docker login -u _json_key -p "$(cat keyfile.json)" https://gcr.io
However, I'm trying to use that same keyfile.json to log in to the registry using the golang docker/engine-api libraries. I have some working code that seems to authenticate fine against other registries, but only when provided a file with the following structure:
{
    "auths": {
        "cr.whatever.com": {
            "password": "PASSWORD",
            "username": "registry"
        }
    }
}
I pass that unmarshalled file into the ImageBuildOptions function here, which is then consumed here.
However, it is not working when using the keyfile.json or a working config.json...
The Docker documentation states that a base64-encoded JSON object with username and password should be used, as described here in the Header Parameters section.
I've tried multiple options to produce a file that can be successfully consumed via the Docker X-Registry-Config header, without much luck...
Any help/hint would be much appreciated.
Thanks!
Thank you for your help jsand, I finally drafted a working function, as below:
func (d *DockerEngineClient) BuildImage(archive, modelId string, authConfigs map[string]types.AuthConfig) (types.ImageBuildResponse, error) {
    var buildResponse types.ImageBuildResponse
    buildContext, err := os.Open(archive)
    if err != nil {
        return buildResponse, err
    }
    defer buildContext.Close()
    c, err := ioutil.ReadFile(os.Getenv("GOOGLE_APPLICATION_CREDENTIALS_FILE"))
    if err != nil {
        return buildResponse, err
    }
    authConfigs2 := make(map[string]types.AuthConfig)
    authConfigs2["gcr.io"] = types.AuthConfig{
        Username:      "_json_key",
        Password:      string(c),
        ServerAddress: fmt.Sprintf("https://%s", d.remoteRegistryPrefix),
    }
    buildOptions := types.ImageBuildOptions{
        Tags:        []string{fmt.Sprintf("%s/%s", d.remoteRegistryPrefix, modelId)},
        AuthConfigs: authConfigs2,
    }
    buildResponse, err = d.client.ImageBuild(context.TODO(), buildContext, buildOptions)
    if err != nil {
        return buildResponse, err
    }
    b, _ := ioutil.ReadAll(buildResponse.Body)
    fmt.Printf("%s\n", b)
    buildResponse.Body.Close()
    return buildResponse, err
}
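A hypothetical call site, assuming d is an initialized DockerEngineClient; the archive path and model ID are placeholders:
if _, err := d.BuildImage("/tmp/build-context.tar", "model-123", nil); err != nil {
    log.Fatal(err)
}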
I think the issue is that 'config' is an overloaded term. It looks like X-Registry-Config contains a base64-encoded version of the JSON-encoded value of options.AuthConfigs, rather than the full Docker config file. If you only want to authenticate for https://gcr.io, your JSON input should be:
{
    "gcr.io": {
        "username": "_json_key",
        "password": "{contents of keyfile.json}"
    }
}
Or, if you're willing to use Docker's golang libs:
import (
    "encoding/base64"
    "encoding/json"
    "net/http"

    "github.com/docker/engine-api/types"
)

func addDockerAuthsHeader(keyfile_contents string, headers http.Header) error {
    authConfigs := map[string]types.AuthConfig{
        "gcr.io": {
            Username: "_json_key",
            Password: keyfile_contents,
        },
    }
    buf, err := json.Marshal(authConfigs)
    if err != nil {
        return err
    }
    headers.Add("X-Registry-Config", base64.URLEncoding.EncodeToString(buf))
    return nil
}
Generating keyfile_contents is left as an exercise for the reader :)

Golang - Docker API - parse result of ImagePull

I'm developing a Go script that uses the Docker API for the purposes of my project. After I log in to my repository, I pull the Docker image I want, but the problem is that the ImagePull function returns an instance of io.ReadCloser, which I'm only able to pass to the system output via:
io.Copy(os.Stdout, pullResp)
It's cool that I can see the response, but I can't find a decent way to parse it and implement logic depending on it, which would do some things if a new version of the image has been downloaded, and other things if the image was up to date.
I'll be glad if you share your experience, if you have ever faced this problem.
You can import github.com/docker/docker/pkg/jsonmessage and use both JSONMessage and JSONProgress to decode the stream, but it's easier to call
DisplayJSONMessagesStream: it both parses the stream and displays the messages as text. Here's how you can display the messages using stderr:
reader, err := cli.ImagePull(ctx, myImageRef, types.ImagePullOptions{})
if err != nil {
    return err
}
defer reader.Close()

termFd, isTerm := term.GetFdInfo(os.Stderr)
jsonmessage.DisplayJSONMessagesStream(reader, os.Stderr, termFd, isTerm, nil)
The nice thing is that it adapts to the output: it updates the lines if this a TTY (the way docker pull does) but it doesn't if the output is redirected to a file.
@radoslav-stoyanov, before using my example, run
# docker rmi busybox
then run this code:
package main

import (
    "encoding/json"
    "fmt"
    "io"
    "strings"

    "github.com/docker/distribution/context"
    docker "github.com/docker/engine-api/client"
    "github.com/docker/engine-api/types"
)

func main() {
    // DOCKER
    cli, err := docker.NewClient("unix:///var/run/docker.sock", "v1.28", nil, map[string]string{"User-Agent": "engine-api-cli-1.0"})
    if err != nil {
        panic(err)
    }
    imageName := "busybox:latest"
    events, err := cli.ImagePull(context.Background(), imageName, types.ImagePullOptions{})
    if err != nil {
        panic(err)
    }

    d := json.NewDecoder(events)

    type Event struct {
        Status         string `json:"status"`
        Error          string `json:"error"`
        Progress       string `json:"progress"`
        ProgressDetail struct {
            Current int `json:"current"`
            Total   int `json:"total"`
        } `json:"progressDetail"`
    }

    var event *Event
    for {
        if err := d.Decode(&event); err != nil {
            if err == io.EOF {
                break
            }
            panic(err)
        }
        fmt.Printf("EVENT: %+v\n", event)
    }

    // Latest event for new image
    // EVENT: {Status:Status: Downloaded newer image for busybox:latest Error: Progress:[==================================================>] 699.2kB/699.2kB ProgressDetail:{Current:699243 Total:699243}}
    // Latest event for up-to-date image
    // EVENT: {Status:Status: Image is up to date for busybox:latest Error: Progress: ProgressDetail:{Current:0 Total:0}}

    if event != nil {
        if strings.Contains(event.Status, fmt.Sprintf("Downloaded newer image for %s", imageName)) {
            // new
            fmt.Println("new")
        }
        if strings.Contains(event.Status, fmt.Sprintf("Image is up to date for %s", imageName)) {
            // up-to-date
            fmt.Println("up-to-date")
        }
    }
}
You can see the API formats used to create your structures (like my Event) for reading them here: https://docs.docker.com/engine/api/v1.27/#operation/ImageCreate
I hope it helps you solve your problem, thanks.
I have used a similar approach for my purposes (not a moby client). The idea is the same for reading any stream response. Give it a try and adapt it to yours.
Reading the stream response of any response type:
reader := bufio.NewReader(pullResp)
defer pullResp.Close() // pullResp is io.ReadCloser
var resp bytes.Buffer
for {
    line, err := reader.ReadBytes('\n')
    if err != nil {
        // it could be EOF or read error
        // handle it
        break
    }
    resp.Write(line)
    resp.WriteByte('\n')
}
// print it
fmt.Println(resp.String())
However, your sample response in the comment looks like a valid JSON structure, and json.Decoder is the best way to read a JSON stream. This is just an idea:
type ImagePullResponse struct {
    ID             string `json:"id"`
    Status         string `json:"status"`
    ProgressDetail struct {
        Current int64 `json:"current"`
        Total   int64 `json:"total"`
    } `json:"progressDetail"`
    Progress string `json:"progress"`
}
And do:
d := json.NewDecoder(pullResp)
for {
    var pullResult ImagePullResponse
    if err := d.Decode(&pullResult); err != nil {
        // handle the error
        break
    }
    fmt.Println(pullResult)
}
