How to use Pa_IsStreamActive in PortAudio?

I use PortAudio V19-devel (pa_stable_v19_20140130) to play back the output of a text-to-speech engine.
First, I registered a PaStreamFinishedCallback (via Pa_SetStreamFinishedCallback), so that it gets called when the audio stream becomes inactive.
According to the stream state machine, returning paAbort from the audio processing callback while the stream is in the "Active" state triggers a transition to the "Callback finished" state, and Pa_IsStreamActive returns 0 once all buffers have been canceled.
Unfortunately I don't seem to be doing this right, since Pa_IsStreamActive still returns 1 after a paAbort. The question is: how do I determine that all buffers have been canceled? Below you can find the corresponding snippets of my code:
int AudioConnection::onAudioDataReceived (const void *input, void *output,
                                          unsigned long frames, const PaStreamCallbackTimeInfo *time,
                                          PaStreamCallbackFlags status, void *userdata
) {
    int finished;
    unsigned int i;
    AudioConnection *data = (AudioConnection *)userdata;
    unsigned int framesLeft = data->numFrames - data->cursor;
    int8_t *out = (int8_t *)output;
    // Cast to void to prevent unused variable warnings
    (void) time; (void) input; (void) status;
    if (framesLeft >= frames) {
        for (i = 0; i < frames; i++) {
            *out++ = data->audioSamples[data->cursor++];
            *out++ = data->audioSamples[data->cursor++];
        }
        data->cursor += frames;
        finished = paContinue;
    }
    else if (framesLeft == 0) {
        *out++ = 0;
        *out++ = 0;
        data->cursor = 0;
        finished = paAbort;
    }
    // final buffer
    else if (framesLeft < frames) {
        for (i = 0; i < framesLeft; i++) {
            *out++ = data->audioSamples[data->cursor++];
            *out++ = data->audioSamples[data->cursor++];
        }
        data->cursor = 0;
        finished = paComplete;
    }
    // should never happen
    else {
        finished = paAbort;
    }
    return finished;
}
This is the callback that gets called after the audio stream has finished:
void AudioConnection::onAudioStreamFinished (void *userdata) {
    AudioConnection *data = (AudioConnection *) userdata;
    ACE_DEBUG ((LM_TRACE, ACE_TEXT ("(%t | %P | %D | %N) AudioConnection::onAudioStreamFinished ()\n")));
    ACE_DEBUG ((LM_TRACE, ACE_TEXT ("(%t | %P | %D | %N) AudioConnection::isAudioStreamActive () = %d \n"), data->isAudioStreamActive ()));
}
And the helper that queries the stream state:
bool AudioConnection::isAudioStreamActive () {
    // Pa_IsStreamActive returns 1 (active), 0 (inactive) or a negative PaError,
    // so only an explicit 1 counts as active.
    return Pa_IsStreamActive (audioStream) == 1;
}
The following trace illustrates the problem: I would expect isAudioStreamActive in the last line to return 0. Any feedback on this is appreciated. Thanks!
(1987417168 | 19339 | 2016-09-27 23:10:30.935040 | ../src/AudioConnection.cpp) AudioConnection::isAudioStreamActive () = 0
(1987417168 | 19339 | 2016-09-27 23:10:30.936162 | ../src/AudioConnection.cpp) AudioConnection::playAudioStream ()
(1954542672 | 19339 | 2016-09-27 23:10:30.977247 | ../src/AudioConnection.cpp) AudioConnection framesLeft: 27648
(1954542672 | 19339 | 2016-09-27 23:10:31.007114 | ../src/AudioConnection.cpp) AudioConnection framesLeft: 26112
(1954542672 | 19339 | 2016-09-27 23:10:31.037122 | ../src/AudioConnection.cpp) AudioConnection framesLeft: 24576
(1954542672 | 19339 | 2016-09-27 23:10:31.067106 | ../src/AudioConnection.cpp) AudioConnection framesLeft: 23040
(1954542672 | 19339 | 2016-09-27 23:10:31.097107 | ../src/AudioConnection.cpp) AudioConnection framesLeft: 21504
(1954542672 | 19339 | 2016-09-27 23:10:31.137147 | ../src/AudioConnection.cpp) AudioConnection framesLeft: 19968
(1954542672 | 19339 | 2016-09-27 23:10:31.187113 | ../src/AudioConnection.cpp) AudioConnection framesLeft: 18432
(1954542672 | 19339 | 2016-09-27 23:10:31.237124 | ../src/AudioConnection.cpp) AudioConnection framesLeft: 16896
(1954542672 | 19339 | 2016-09-27 23:10:31.287146 | ../src/AudioConnection.cpp) AudioConnection framesLeft: 15360
(1954542672 | 19339 | 2016-09-27 23:10:31.337112 | ../src/AudioConnection.cpp) AudioConnection framesLeft: 13824
(1954542672 | 19339 | 2016-09-27 23:10:31.387106 | ../src/AudioConnection.cpp) AudioConnection framesLeft: 12288
(1954542672 | 19339 | 2016-09-27 23:10:31.437225 | ../src/AudioConnection.cpp) AudioConnection framesLeft: 10752
(1954542672 | 19339 | 2016-09-27 23:10:31.487161 | ../src/AudioConnection.cpp) AudioConnection framesLeft: 9216
(1954542672 | 19339 | 2016-09-27 23:10:31.537158 | ../src/AudioConnection.cpp) AudioConnection framesLeft: 7680
(1954542672 | 19339 | 2016-09-27 23:10:31.587206 | ../src/AudioConnection.cpp) AudioConnection framesLeft: 6144
(1954542672 | 19339 | 2016-09-27 23:10:31.637156 | ../src/AudioConnection.cpp) AudioConnection framesLeft: 4608
(1954542672 | 19339 | 2016-09-27 23:10:31.687188 | ../src/AudioConnection.cpp) AudioConnection framesLeft: 3072
(1954542672 | 19339 | 2016-09-27 23:10:31.737236 | ../src/AudioConnection.cpp) AudioConnection framesLeft: 1536
(1954542672 | 19339 | 2016-09-27 23:10:31.787165 | ../src/AudioConnection.cpp) AudioConnection framesLeft: 0 .......silence.
(1954542672 | 19339 | 2016-09-27 23:10:31.787428 | ../src/AudioConnection.cpp) AudioConnection::onAudioStreamFinished ()
(1954542672 | 19339 | 2016-09-27 23:10:31.787736 | ../src/AudioConnection.cpp) AudioConnection::isAudioStreamActive () = 1

The result of Pa_IsStreamActive depends on which host API you use. It's not so surprising that it returns 1 after paAbort, since paAbort signals that something anomalous occurred and PortAudio should stop immediately. Most likely you want paComplete instead, which lets the stream finish playing any pending buffers before it goes inactive. I also think you want framesLeft <= frames for the "final buffer" case, so the last partial buffer is handled there rather than falling through to the framesLeft == 0 branch.
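To make the suggested control flow concrete, here is a small Python sketch of the callback's buffer-draining logic (hypothetical names, not the PortAudio API): the final, possibly partial, buffer is caught by framesLeft <= frames and returns paComplete, so the cursor lands exactly at the end of the sample data.

```python
# Stand-ins for PortAudio's paContinue / paComplete return codes.
PA_CONTINUE, PA_COMPLETE = 0, 1

def on_audio_data(samples, cursor, frames):
    """Model one callback invocation: return (new_cursor, result)."""
    frames_left = len(samples) - cursor
    if frames_left > frames:
        # Full buffer available: consume `frames` samples and keep going.
        return cursor + frames, PA_CONTINUE
    # Final (possibly partial) buffer: consume what is left, then signal
    # completion so pending buffers can drain before the stream goes inactive.
    return cursor + frames_left, PA_COMPLETE

# Drive the simulated callback until it reports completion.
samples = list(range(2000))
cursor, result = 0, PA_CONTINUE
while result == PA_CONTINUE:
    cursor, result = on_audio_data(samples, cursor, frames=512)

print(cursor, result)
```

After paComplete, the real stream only becomes inactive once PortAudio has played out its internal buffers, so polling Pa_IsStreamActive (or waiting for the stream-finished callback) is still needed before e.g. closing the stream.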


Pytorch model Parameters changes in CPU and GPU

I created a model and saved its weights using Google Colab. Now I have written a prediction script, which contains the model class. I am trying to load the model weights using the following method:
Saving & Loading Model Across Devices: Save on GPU, Load on CPU
Save:
    torch.save(model.state_dict(), PATH)
Load:
    device = torch.device('cpu')
    model = TheModelClass(*args, **kwargs)
    model.load_state_dict(torch.load(PATH, map_location=device))
The above method should work, right? Yes.
But when I do so, the model has different parameters in Google Colab (prediction, runtime=None, device=CPU) than on my local machine (prediction, device=cpu).
Model params in Colab:
def count_parameters(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

print(f'The model has {count_parameters(model):,} trainable parameters')
The model has 12,490,234 trainable parameters
+-------------------------------------------------------+------------+
| Modules | Parameters |
+-------------------------------------------------------+------------+
| encoder.tok_embedding.weight | 2053376 |
| encoder.pos_embedding.weight | 25600 |
| encoder.layers.0.self_attn_layer_norm.weight | 256 |
| encoder.layers.0.self_attn_layer_norm.bias | 256 |
| encoder.layers.0.ff_layer_norm.weight | 256 |
| encoder.layers.0.ff_layer_norm.bias | 256 |
| encoder.layers.0.self_attention.fc_q.weight | 65536 |
| encoder.layers.0.self_attention.fc_q.bias | 256 |
| encoder.layers.0.self_attention.fc_k.weight | 65536 |
| encoder.layers.0.self_attention.fc_k.bias | 256 |
| encoder.layers.0.self_attention.fc_v.weight | 65536 |
| encoder.layers.0.self_attention.fc_v.bias | 256 |
| encoder.layers.0.self_attention.fc_o.weight | 65536 |
| encoder.layers.0.self_attention.fc_o.bias | 256 |
| encoder.layers.0.positionwise_feedforward.fc_1.weight | 131072 |
| encoder.layers.0.positionwise_feedforward.fc_1.bias | 512 |
| encoder.layers.0.positionwise_feedforward.fc_2.weight | 131072 |
| encoder.layers.0.positionwise_feedforward.fc_2.bias | 256 |
| encoder.layers.1.self_attn_layer_norm.weight | 256 |
| encoder.layers.1.self_attn_layer_norm.bias | 256 |
| encoder.layers.1.ff_layer_norm.weight | 256 |
| encoder.layers.1.ff_layer_norm.bias | 256 |
| encoder.layers.1.self_attention.fc_q.weight | 65536 |
| encoder.layers.1.self_attention.fc_q.bias | 256 |
| encoder.layers.1.self_attention.fc_k.weight | 65536 |
| encoder.layers.1.self_attention.fc_k.bias | 256 |
| encoder.layers.1.self_attention.fc_v.weight | 65536 |
| encoder.layers.1.self_attention.fc_v.bias | 256 |
| encoder.layers.1.self_attention.fc_o.weight | 65536 |
| encoder.layers.1.self_attention.fc_o.bias | 256 |
| encoder.layers.1.positionwise_feedforward.fc_1.weight | 131072 |
| encoder.layers.1.positionwise_feedforward.fc_1.bias | 512 |
| encoder.layers.1.positionwise_feedforward.fc_2.weight | 131072 |
| encoder.layers.1.positionwise_feedforward.fc_2.bias | 256 |
| encoder.layers.2.self_attn_layer_norm.weight | 256 |
| encoder.layers.2.self_attn_layer_norm.bias | 256 |
| encoder.layers.2.ff_layer_norm.weight | 256 |
| encoder.layers.2.ff_layer_norm.bias | 256 |
| encoder.layers.2.self_attention.fc_q.weight | 65536 |
| encoder.layers.2.self_attention.fc_q.bias | 256 |
| encoder.layers.2.self_attention.fc_k.weight | 65536 |
| encoder.layers.2.self_attention.fc_k.bias | 256 |
| encoder.layers.2.self_attention.fc_v.weight | 65536 |
| encoder.layers.2.self_attention.fc_v.bias | 256 |
| encoder.layers.2.self_attention.fc_o.weight | 65536 |
| encoder.layers.2.self_attention.fc_o.bias | 256 |
| encoder.layers.2.positionwise_feedforward.fc_1.weight | 131072 |
| encoder.layers.2.positionwise_feedforward.fc_1.bias | 512 |
| encoder.layers.2.positionwise_feedforward.fc_2.weight | 131072 |
| encoder.layers.2.positionwise_feedforward.fc_2.bias | 256 |
| decoder.tok_embedding.weight | 3209728 |
| decoder.pos_embedding.weight | 25600 |
| decoder.layers.0.self_attn_layer_norm.weight | 256 |
| decoder.layers.0.self_attn_layer_norm.bias | 256 |
| decoder.layers.0.enc_attn_layer_norm.weight | 256 |
| decoder.layers.0.enc_attn_layer_norm.bias | 256 |
| decoder.layers.0.ff_layer_norm.weight | 256 |
| decoder.layers.0.ff_layer_norm.bias | 256 |
| decoder.layers.0.self_attention.fc_q.weight | 65536 |
| decoder.layers.0.self_attention.fc_q.bias | 256 |
| decoder.layers.0.self_attention.fc_k.weight | 65536 |
| decoder.layers.0.self_attention.fc_k.bias | 256 |
| decoder.layers.0.self_attention.fc_v.weight | 65536 |
| decoder.layers.0.self_attention.fc_v.bias | 256 |
| decoder.layers.0.self_attention.fc_o.weight | 65536 |
| decoder.layers.0.self_attention.fc_o.bias | 256 |
| decoder.layers.0.encoder_attention.fc_q.weight | 65536 |
| decoder.layers.0.encoder_attention.fc_q.bias | 256 |
| decoder.layers.0.encoder_attention.fc_k.weight | 65536 |
| decoder.layers.0.encoder_attention.fc_k.bias | 256 |
| decoder.layers.0.encoder_attention.fc_v.weight | 65536 |
| decoder.layers.0.encoder_attention.fc_v.bias | 256 |
| decoder.layers.0.encoder_attention.fc_o.weight | 65536 |
| decoder.layers.0.encoder_attention.fc_o.bias | 256 |
| decoder.layers.0.positionwise_feedforward.fc_1.weight | 131072 |
| decoder.layers.0.positionwise_feedforward.fc_1.bias | 512 |
| decoder.layers.0.positionwise_feedforward.fc_2.weight | 131072 |
| decoder.layers.0.positionwise_feedforward.fc_2.bias | 256 |
| decoder.layers.1.self_attn_layer_norm.weight | 256 |
| decoder.layers.1.self_attn_layer_norm.bias | 256 |
| decoder.layers.1.enc_attn_layer_norm.weight | 256 |
| decoder.layers.1.enc_attn_layer_norm.bias | 256 |
| decoder.layers.1.ff_layer_norm.weight | 256 |
| decoder.layers.1.ff_layer_norm.bias | 256 |
| decoder.layers.1.self_attention.fc_q.weight | 65536 |
| decoder.layers.1.self_attention.fc_q.bias | 256 |
| decoder.layers.1.self_attention.fc_k.weight | 65536 |
| decoder.layers.1.self_attention.fc_k.bias | 256 |
| decoder.layers.1.self_attention.fc_v.weight | 65536 |
| decoder.layers.1.self_attention.fc_v.bias | 256 |
| decoder.layers.1.self_attention.fc_o.weight | 65536 |
| decoder.layers.1.self_attention.fc_o.bias | 256 |
| decoder.layers.1.encoder_attention.fc_q.weight | 65536 |
| decoder.layers.1.encoder_attention.fc_q.bias | 256 |
| decoder.layers.1.encoder_attention.fc_k.weight | 65536 |
| decoder.layers.1.encoder_attention.fc_k.bias | 256 |
| decoder.layers.1.encoder_attention.fc_v.weight | 65536 |
| decoder.layers.1.encoder_attention.fc_v.bias | 256 |
| decoder.layers.1.encoder_attention.fc_o.weight | 65536 |
| decoder.layers.1.encoder_attention.fc_o.bias | 256 |
| decoder.layers.1.positionwise_feedforward.fc_1.weight | 131072 |
| decoder.layers.1.positionwise_feedforward.fc_1.bias | 512 |
| decoder.layers.1.positionwise_feedforward.fc_2.weight | 131072 |
| decoder.layers.1.positionwise_feedforward.fc_2.bias | 256 |
| decoder.layers.2.self_attn_layer_norm.weight | 256 |
| decoder.layers.2.self_attn_layer_norm.bias | 256 |
| decoder.layers.2.enc_attn_layer_norm.weight | 256 |
| decoder.layers.2.enc_attn_layer_norm.bias | 256 |
| decoder.layers.2.ff_layer_norm.weight | 256 |
| decoder.layers.2.ff_layer_norm.bias | 256 |
| decoder.layers.2.self_attention.fc_q.weight | 65536 |
| decoder.layers.2.self_attention.fc_q.bias | 256 |
| decoder.layers.2.self_attention.fc_k.weight | 65536 |
| decoder.layers.2.self_attention.fc_k.bias | 256 |
| decoder.layers.2.self_attention.fc_v.weight | 65536 |
| decoder.layers.2.self_attention.fc_v.bias | 256 |
| decoder.layers.2.self_attention.fc_o.weight | 65536 |
| decoder.layers.2.self_attention.fc_o.bias | 256 |
| decoder.layers.2.encoder_attention.fc_q.weight | 65536 |
| decoder.layers.2.encoder_attention.fc_q.bias | 256 |
| decoder.layers.2.encoder_attention.fc_k.weight | 65536 |
| decoder.layers.2.encoder_attention.fc_k.bias | 256 |
| decoder.layers.2.encoder_attention.fc_v.weight | 65536 |
| decoder.layers.2.encoder_attention.fc_v.bias | 256 |
| decoder.layers.2.encoder_attention.fc_o.weight | 65536 |
| decoder.layers.2.encoder_attention.fc_o.bias | 256 |
| decoder.layers.2.positionwise_feedforward.fc_1.weight | 131072 |
| decoder.layers.2.positionwise_feedforward.fc_1.bias | 512 |
| decoder.layers.2.positionwise_feedforward.fc_2.weight | 131072 |
| decoder.layers.2.positionwise_feedforward.fc_2.bias | 256 |
| decoder.fc_out.weight | 3209728 |
| decoder.fc_out.bias | 12538 |
+-------------------------------------------------------+------------+
Total Trainable Params: 12490234
Model params on my local machine:
def count_parameters(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

print(f'The model has {count_parameters(model):,} trainable parameters')
The model has 12,506,137 trainable parameters
+-------------------------------------------------------+------------+
| Modules | Parameters |
+-------------------------------------------------------+------------+
| encoder.tok_embedding.weight | 2053376 |
| encoder.pos_embedding.weight | 25600 |
| encoder.layers.0.self_attn_layer_norm.weight | 256 |
| encoder.layers.0.self_attn_layer_norm.bias | 256 |
| encoder.layers.0.ff_layer_norm.weight | 256 |
| encoder.layers.0.ff_layer_norm.bias | 256 |
| encoder.layers.0.self_attention.fc_q.weight | 65536 |
| encoder.layers.0.self_attention.fc_q.bias | 256 |
| encoder.layers.0.self_attention.fc_k.weight | 65536 |
| encoder.layers.0.self_attention.fc_k.bias | 256 |
| encoder.layers.0.self_attention.fc_v.weight | 65536 |
| encoder.layers.0.self_attention.fc_v.bias | 256 |
| encoder.layers.0.self_attention.fc_o.weight | 65536 |
| encoder.layers.0.self_attention.fc_o.bias | 256 |
| encoder.layers.0.positionwise_feedforward.fc_1.weight | 131072 |
| encoder.layers.0.positionwise_feedforward.fc_1.bias | 512 |
| encoder.layers.0.positionwise_feedforward.fc_2.weight | 131072 |
| encoder.layers.0.positionwise_feedforward.fc_2.bias | 256 |
| encoder.layers.1.self_attn_layer_norm.weight | 256 |
| encoder.layers.1.self_attn_layer_norm.bias | 256 |
| encoder.layers.1.ff_layer_norm.weight | 256 |
| encoder.layers.1.ff_layer_norm.bias | 256 |
| encoder.layers.1.self_attention.fc_q.weight | 65536 |
| encoder.layers.1.self_attention.fc_q.bias | 256 |
| encoder.layers.1.self_attention.fc_k.weight | 65536 |
| encoder.layers.1.self_attention.fc_k.bias | 256 |
| encoder.layers.1.self_attention.fc_v.weight | 65536 |
| encoder.layers.1.self_attention.fc_v.bias | 256 |
| encoder.layers.1.self_attention.fc_o.weight | 65536 |
| encoder.layers.1.self_attention.fc_o.bias | 256 |
| encoder.layers.1.positionwise_feedforward.fc_1.weight | 131072 |
| encoder.layers.1.positionwise_feedforward.fc_1.bias | 512 |
| encoder.layers.1.positionwise_feedforward.fc_2.weight | 131072 |
| encoder.layers.1.positionwise_feedforward.fc_2.bias | 256 |
| encoder.layers.2.self_attn_layer_norm.weight | 256 |
| encoder.layers.2.self_attn_layer_norm.bias | 256 |
| encoder.layers.2.ff_layer_norm.weight | 256 |
| encoder.layers.2.ff_layer_norm.bias | 256 |
| encoder.layers.2.self_attention.fc_q.weight | 65536 |
| encoder.layers.2.self_attention.fc_q.bias | 256 |
| encoder.layers.2.self_attention.fc_k.weight | 65536 |
| encoder.layers.2.self_attention.fc_k.bias | 256 |
| encoder.layers.2.self_attention.fc_v.weight | 65536 |
| encoder.layers.2.self_attention.fc_v.bias | 256 |
| encoder.layers.2.self_attention.fc_o.weight | 65536 |
| encoder.layers.2.self_attention.fc_o.bias | 256 |
| encoder.layers.2.positionwise_feedforward.fc_1.weight | 131072 |
| encoder.layers.2.positionwise_feedforward.fc_1.bias | 512 |
| encoder.layers.2.positionwise_feedforward.fc_2.weight | 131072 |
| encoder.layers.2.positionwise_feedforward.fc_2.bias | 256 |
| decoder.tok_embedding.weight | 3217664 |
| decoder.pos_embedding.weight | 25600 |
| decoder.layers.0.self_attn_layer_norm.weight | 256 |
| decoder.layers.0.self_attn_layer_norm.bias | 256 |
| decoder.layers.0.enc_attn_layer_norm.weight | 256 |
| decoder.layers.0.enc_attn_layer_norm.bias | 256 |
| decoder.layers.0.ff_layer_norm.weight | 256 |
| decoder.layers.0.ff_layer_norm.bias | 256 |
| decoder.layers.0.self_attention.fc_q.weight | 65536 |
| decoder.layers.0.self_attention.fc_q.bias | 256 |
| decoder.layers.0.self_attention.fc_k.weight | 65536 |
| decoder.layers.0.self_attention.fc_k.bias | 256 |
| decoder.layers.0.self_attention.fc_v.weight | 65536 |
| decoder.layers.0.self_attention.fc_v.bias | 256 |
| decoder.layers.0.self_attention.fc_o.weight | 65536 |
| decoder.layers.0.self_attention.fc_o.bias | 256 |
| decoder.layers.0.encoder_attention.fc_q.weight | 65536 |
| decoder.layers.0.encoder_attention.fc_q.bias | 256 |
| decoder.layers.0.encoder_attention.fc_k.weight | 65536 |
| decoder.layers.0.encoder_attention.fc_k.bias | 256 |
| decoder.layers.0.encoder_attention.fc_v.weight | 65536 |
| decoder.layers.0.encoder_attention.fc_v.bias | 256 |
| decoder.layers.0.encoder_attention.fc_o.weight | 65536 |
| decoder.layers.0.encoder_attention.fc_o.bias | 256 |
| decoder.layers.0.positionwise_feedforward.fc_1.weight | 131072 |
| decoder.layers.0.positionwise_feedforward.fc_1.bias | 512 |
| decoder.layers.0.positionwise_feedforward.fc_2.weight | 131072 |
| decoder.layers.0.positionwise_feedforward.fc_2.bias | 256 |
| decoder.layers.1.self_attn_layer_norm.weight | 256 |
| decoder.layers.1.self_attn_layer_norm.bias | 256 |
| decoder.layers.1.enc_attn_layer_norm.weight | 256 |
| decoder.layers.1.enc_attn_layer_norm.bias | 256 |
| decoder.layers.1.ff_layer_norm.weight | 256 |
| decoder.layers.1.ff_layer_norm.bias | 256 |
| decoder.layers.1.self_attention.fc_q.weight | 65536 |
| decoder.layers.1.self_attention.fc_q.bias | 256 |
| decoder.layers.1.self_attention.fc_k.weight | 65536 |
| decoder.layers.1.self_attention.fc_k.bias | 256 |
| decoder.layers.1.self_attention.fc_v.weight | 65536 |
| decoder.layers.1.self_attention.fc_v.bias | 256 |
| decoder.layers.1.self_attention.fc_o.weight | 65536 |
| decoder.layers.1.self_attention.fc_o.bias | 256 |
| decoder.layers.1.encoder_attention.fc_q.weight | 65536 |
| decoder.layers.1.encoder_attention.fc_q.bias | 256 |
| decoder.layers.1.encoder_attention.fc_k.weight | 65536 |
| decoder.layers.1.encoder_attention.fc_k.bias | 256 |
| decoder.layers.1.encoder_attention.fc_v.weight | 65536 |
| decoder.layers.1.encoder_attention.fc_v.bias | 256 |
| decoder.layers.1.encoder_attention.fc_o.weight | 65536 |
| decoder.layers.1.encoder_attention.fc_o.bias | 256 |
| decoder.layers.1.positionwise_feedforward.fc_1.weight | 131072 |
| decoder.layers.1.positionwise_feedforward.fc_1.bias | 512 |
| decoder.layers.1.positionwise_feedforward.fc_2.weight | 131072 |
| decoder.layers.1.positionwise_feedforward.fc_2.bias | 256 |
| decoder.layers.2.self_attn_layer_norm.weight | 256 |
| decoder.layers.2.self_attn_layer_norm.bias | 256 |
| decoder.layers.2.enc_attn_layer_norm.weight | 256 |
| decoder.layers.2.enc_attn_layer_norm.bias | 256 |
| decoder.layers.2.ff_layer_norm.weight | 256 |
| decoder.layers.2.ff_layer_norm.bias | 256 |
| decoder.layers.2.self_attention.fc_q.weight | 65536 |
| decoder.layers.2.self_attention.fc_q.bias | 256 |
| decoder.layers.2.self_attention.fc_k.weight | 65536 |
| decoder.layers.2.self_attention.fc_k.bias | 256 |
| decoder.layers.2.self_attention.fc_v.weight | 65536 |
| decoder.layers.2.self_attention.fc_v.bias | 256 |
| decoder.layers.2.self_attention.fc_o.weight | 65536 |
| decoder.layers.2.self_attention.fc_o.bias | 256 |
| decoder.layers.2.encoder_attention.fc_q.weight | 65536 |
| decoder.layers.2.encoder_attention.fc_q.bias | 256 |
| decoder.layers.2.encoder_attention.fc_k.weight | 65536 |
| decoder.layers.2.encoder_attention.fc_k.bias | 256 |
| decoder.layers.2.encoder_attention.fc_v.weight | 65536 |
| decoder.layers.2.encoder_attention.fc_v.bias | 256 |
| decoder.layers.2.encoder_attention.fc_o.weight | 65536 |
| decoder.layers.2.encoder_attention.fc_o.bias | 256 |
| decoder.layers.2.positionwise_feedforward.fc_1.weight | 131072 |
| decoder.layers.2.positionwise_feedforward.fc_1.bias | 512 |
| decoder.layers.2.positionwise_feedforward.fc_2.weight | 131072 |
| decoder.layers.2.positionwise_feedforward.fc_2.bias | 256 |
| decoder.fc_out.weight | 3217664 |
| decoder.fc_out.bias | 12569 |
+-------------------------------------------------------+------------+
Total Trainable Params: 12506137
So that's why I am unable to load the model: it has different parameters locally.
Even when I try to load the weights locally, it gives me an error:
model.load_state_dict(torch.load(f"{model_name}.pt", map_location=device))
Error:
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-24-f5baac4441a5> in <module>
----> 1 model.load_state_dict(torch.load(f"{model_name}_2.pt", map_location=device))

c:\anaconda\envs\lang_trans\lib\site-packages\torch\nn\modules\module.py in load_state_dict(self, state_dict, strict)
    845         if len(error_msgs) > 0:
    846             raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
--> 847                 self.__class__.__name__, "\n\t".join(error_msgs)))
    848         return _IncompatibleKeys(missing_keys, unexpected_keys)
    849

RuntimeError: Error(s) in loading state_dict for Seq2Seq:
    size mismatch for decoder.tok_embedding.weight: copying a param with shape torch.Size([12538, 256]) from checkpoint, the shape in current model is torch.Size([12569, 256]).
    size mismatch for decoder.fc_out.weight: copying a param with shape torch.Size([12538, 256]) from checkpoint, the shape in current model is torch.Size([12569, 256]).
    size mismatch for decoder.fc_out.bias: copying a param with shape torch.Size([12538]) from checkpoint, the shape in current model is torch.Size([12569]).
The local model params must be wrong, because in Colab (device=CPU, runtime=None) I am able to load the weights after defining the model class, but on the local machine the params change, so I am unable to load the weights. I know it's weird; please help me find the solution.
You can check the full code of the model here-
https://gist.github.com/Dipeshpal/90c715a7b7f00845e20ef998bda35835
After this point, the model params change.
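For what it's worth, the gap between the two totals is fully accounted for by the three mismatched decoder tensors, all of which scale with the decoder vocabulary size. In other words, the local run built a vocabulary with 31 more tokens (which can happen if the vocab is rebuilt from data instead of being saved with the checkpoint); the architecture itself is unchanged. A quick arithmetic check using only the numbers from the two tables above:

```python
# All numbers below come from the two parameter tables in the question.
emb_dim = 256                                # embedding / hidden size
colab_vocab, local_vocab = 12538, 12569      # decoder.fc_out.bias sizes
colab_total, local_total = 12490234, 12506137

extra_tokens = local_vocab - colab_vocab     # extra vocabulary entries locally

# Each extra token adds one tok_embedding row, one fc_out weight row,
# and one fc_out bias entry.
per_token = emb_dim + emb_dim + 1

print(extra_tokens, extra_tokens * per_token, local_total - colab_total)
```

If this holds, the usual fix is to build the vocabulary once and save it alongside the weights (or pin the data and seed used to build it) so the local model is constructed with the same vocab size as the checkpoint; that is a suggestion based on the shape arithmetic, not something confirmed in the question.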

UIAlertControllerInterfaceActionGroupView AMBIGUOUS LAYOUT

I am trying to figure out a constraint issue when presenting my UIAlertController.
2019-04-26 11:40:42.140007+0100 MyAPP[12811:3974220] [LayoutConstraints] Unable to simultaneously satisfy constraints.
Probably at least one of the constraints in the following list is one you don't want.
Try this:
(1) look at each constraint and try to figure out which you don't expect;
(2) find the code that added the unwanted constraint or constraints and fix it.
(
""
)
Will attempt to recover by breaking constraint
Make a symbolic breakpoint at UIViewAlertForUnsatisfiableConstraints to catch this in the debugger.
The methods in the UIConstraintBasedLayoutDebugging category on UIView listed in <UIKitCore/UIView.h> may also be helpful.
I create and present the UIAlertController using the following function:
@discardableResult
static func showOptionsModal(withOptions options: [String], sender: UIViewController, holderView: UIView, arrowPosition: UIPopoverArrowDirection, completion: ((_ option: Int) -> ())?) -> UIAlertController {
    let moreActionsCellSheetController = UIAlertController(title: nil, message: nil, preferredStyle: .actionSheet)
    for (i, option) in options.enumerated() {
        let optionAction = UIAlertAction(title: option, style: .default) { (_) in
            completion?(i)
        }
        moreActionsCellSheetController.addAction(optionAction)
    }
    if UIDevice.current.userInterfaceIdiom != .pad {
        let actionCancel = UIAlertAction(title: printLocalized(withKey: "messages.cancel", targetSpecific: false), style: .destructive) { (_) in
            completion?(-1)
        }
        moreActionsCellSheetController.addAction(actionCancel)
    }
    moreActionsCellSheetController.view.tintColor = UIColor.black
    moreActionsCellSheetController.popoverPresentationController?.sourceView = holderView
    moreActionsCellSheetController.popoverPresentationController?.sourceRect = holderView.frame
    switch arrowPosition {
    case .up:
        moreActionsCellSheetController.popoverPresentationController?.sourceRect.origin.y = 0
        moreActionsCellSheetController.popoverPresentationController?.sourceRect.origin.x = (-holderView.frame.width / 8)
        moreActionsCellSheetController.popoverPresentationController?.permittedArrowDirections = .up
    case .right:
        moreActionsCellSheetController.popoverPresentationController?.sourceRect.origin.y = 0
        moreActionsCellSheetController.popoverPresentationController?.sourceRect.origin.x = 0
        moreActionsCellSheetController.popoverPresentationController?.permittedArrowDirections = .right
    case .down:
        moreActionsCellSheetController.popoverPresentationController?.sourceRect.origin.y = holderView.bounds.width/2
        moreActionsCellSheetController.popoverPresentationController?.permittedArrowDirections = .down
    case .left:
        moreActionsCellSheetController.popoverPresentationController?.sourceRect.origin.y = holderView.bounds.height/2
        moreActionsCellSheetController.popoverPresentationController?.permittedArrowDirections = .left
    default:
        moreActionsCellSheetController.popoverPresentationController?.sourceRect.origin.y = 0
        moreActionsCellSheetController.popoverPresentationController?.sourceRect.origin.x = 0
        moreActionsCellSheetController.popoverPresentationController?.permittedArrowDirections = .any
    }
    sender.present(moreActionsCellSheetController, animated: true, completion: nil)
    return moreActionsCellSheetController
}
I added a symbolic breakpoint with an action like so:
The autolayoutTrace shows the following result:
•UIWindow:0x159d197b0 - AMBIGUOUS LAYOUT
| •UIView:0x159d3fbf0
| | *<UILayoutGuide: 0x2817eda40 - "UIViewSafeAreaLayoutGuide", layoutFrame = {{0, 44}, {375, 734}}, owningView = <UIView: 0x159d3fbf0; frame = (0 0; 375 812); autoresize = W+H; tintColor = UIExtendedSRGBColorSpace 0.176471 0.176471 0.176471 1; layer = <CALayer: 0x282ec41a0>>>
| | *UIView:0x159d367d0
| | | UILayoutContainerView:0x159d580b0
| | | | UINavigationTransitionView:0x159d5a760
| | | | | UIViewControllerWrapperView:0x159d4e880
| | | | | | •UIView:0x15b403b70
| | | | | | | *<UILayoutGuide: 0x2817fd5e0 - "UIViewSafeAreaLayoutGuide", layoutFrame = {{0, 0}, {375, 640}}, owningView = <UIView: 0x15b403b70; frame = (0 44; 375 640); autoresize = W+H; tintColor = UIExtendedSRGBColorSpace 0.176471 0.176471 0.176471 1; layer = <CALayer: 0x282e8ad20>>>
| | | | | | | *DAT_Air_Vinyl.MainLabel:0x15b503030'Select device to connect ...'
| | | | | | | *UIScrollView:0x15c014c00
| | | | | | | | *<_UIScrollViewContentOffsetGuide: 0x2817ecfc0 - "UIScrollView-contentOffsetLayoutGuide", layoutFrame = {{0, 0}, {0, 0}}, owningView = <UIScrollView: 0x15c014c00; frame = (0 0; 375 640); clipsToBounds = YES; hidden = YES; autoresize = RM+BM; gestureRecognizers = <NSArray: 0x282079e00>; layer = <CALayer: 0x282e8ce00>; contentOffset: {0, 0}; contentSize: {375, 746}; adjustedContentInset: {0, 0, 0, 0}>>
| | | | | | | | *DAT_Air_Vinyl.RecordHeaderView:0x15b503320
| | | | | | | | | *DAT_Air_Vinyl.RecordHeaderView:0x15b406010
| | | | | | | | | | *<UILayoutGuide: 0x2817e5a40 - "UIViewSafeAreaLayoutGuide", layoutFrame = {{0, 0}, {375, 150}}, owningView = <DAT_Air_Vinyl.RecordHeaderView: 0x15b406010; frame = (0 0; 375 150); clipsToBounds = YES; autoresize = RM+BM; layer = <CALayer: 0x282e8b120>>>
| | | | | | | | | | *UIView:0x15b406240
| | | | | | | | | | | *DAT_Air_Vinyl.MainButton:0x15b406420'Enable'
| | | | | | | | | | | | UIImageView:0x159d74e80
| | | | | | | | | | | | UIButtonLabel:0x15b406950'Enable'
| | | | | | | | | | | *DAT_Air_Vinyl.MainView:0x15b4070b0
| | | | | | | | | | | | *DAT_Air_Vinyl.MainLabel:0x15b4075f0'Gain:'
| | | | | | | | | | | | *DAT_Air_Vinyl.SelectorDropDownView:0x15b407ae0
| | | | | | | | | | | | | *UIImageView:0x15b604f90
| | | | | | | | | | | | | *DAT_Air_Vinyl.MainLabel:0x15b60aa20'0 dB'
| | | | | | | | | | | *DAT_Air_Vinyl.MainButton:0x15b4041a0'Clear all'
| | | | | | | | | | | | UIImageView:0x159d79740
| | | | | | | | | | | | UIButtonLabel:0x15b4046d0'Clear all'
| | | | | | | | | | *UIView:0x15b508ed0
| | | | | | | | | *UIView:0x15b50d250
| | | | | | | | *DAT_Air_Vinyl.MainTableView:0x15c025c00
| | | | | | | | | UIView:0x15b5084d0
| | | | | | | | | UIImageView:0x159d767c0
| | | | | | | | | UIImageView:0x159d769f0
| | | | | | | | UIImageView:0x159d76f70
| | | | | | | | UIImageView:0x159d76d40
| | | | | | | *_TtCC13DAT_Air_Vinyl20SelectorDropDownView12DropDownView:0x15b509420
| | | | | | | | *DAT_Air_Vinyl.MainTableView:0x15c026800
| | | | | | | | | UIView:0x15b50a210
| | | | | | | | | UIImageView:0x159d7a740
| | | | UINavigationBar:0x159d582b0
| | | | | _UIBarBackground:0x159d58770
| | | | | | UIImageView:0x159d58c00
| | | | | _UINavigationBarLargeTitleView:0x159d59880
| | | | | | UILabel:0x159d59dc0
| | | | | •_UINavigationBarContentView:0x159d59260
| | | | | | *<UILayoutGuide: 0x2817ee060 - "BackButtonGuide(0x159d59720)", layoutFrame = {{0, 0}, {8, 44}}, owningView = <_UINavigationBarContentView: 0x159d59260; frame = (0 0; 375 44); layer = <CALayer: 0x282eaea60>>>
| | | | | | *<UILayoutGuide: 0x2817ee140 - "LeadingBarGuide(0x159d59720)", layoutFrame = {{8, 0}, {0, 44}}, owningView = <_UINavigationBarContentView: 0x159d59260; frame = (0 0; 375 44); layer = <CALayer: 0x282eaea60>>>
| | | | | | *<UILayoutGuide: 0x2817ee220 - "TitleView(0x159d59720)", layoutFrame = {{8, 0}, {359, 44}}, owningView = <_UINavigationBarContentView: 0x159d59260; frame = (0 0; 375 44); layer = <CALayer: 0x282eaea60>>>
| | | | | | *<UILayoutGuide: 0x2817ee300 - "TrailingBarGuide(0x159d59720)", layoutFrame = {{367, 0}, {0, 44}}, owningView = <_UINavigationBarContentView: 0x159d59260; frame = (0 0; 375 44); layer = <CALayer: 0x282eaea60>>>
| | | | | | *<UILayoutGuide: 0x2817ee3e0 - "UIViewLayoutMarginsGuide", layoutFrame = {{16, 0}, {343, 44}}, owningView = <_UINavigationBarContentView: 0x159d59260; frame = (0 0; 375 44); layer = <CALayer: 0x282eaea60>>>
| | | | | | *<UILayoutGuide: 0x2817e5ce0 - "UIViewSafeAreaLayoutGuide", layoutFrame = {{0, 0}, {375, 44}}, owningView = <_UINavigationBarContentView: 0x159d59260; frame = (0 0; 375 44); layer = <CALayer: 0x282eaea60>>>
| | | | | | *_UITAMICAdaptorView:0x15b50e7a0
| | | | | | | DAT_Air_Vinyl.RecordViewNavigationBar:0x15b504370
| | | | | | | | UIImageView:0x15b50d430
| | | | | | | | DAT_Air_Vinyl.ScrollableTextView:0x15b50d660
| | | | | | | | | UIScrollView:0x15c017200
| | | | | | | | | | DAT_Air_Vinyl.MainLabel:0x15b50d8c0'DAT-Air WSS (95B6DC)'
| | | | | | | | | | DAT_Air_Vinyl.MainLabel:0x15b50dbb0'DAT-Air WSS (95B6DC)'
| | | | | | | | UIButton:0x15b50dea0
| | | | | | | | | UIImageView:0x159d773f0
| | | | | _UINavigationBarModernPromptView:0x159d5a0b0
| | *UIView:0x159d369b0
| | | *DAT_Air_Vinyl.MiniPlayerView:0x159d458c0
| | | | *<UILayoutGuide: 0x2817edb20 - "UIViewSafeAreaLayoutGuide", layoutFrame = {{0, 0}, {375, 1}}, owningView = <DAT_Air_Vinyl.MiniPlayerView: 0x159d458c0; frame = (0 0; 375 1); autoresize = W+H; gestureRecognizers = <NSArray: 0x2820829a0>; layer = <CALayer: 0x282eda960>>>
| | | | *UIImageView:0x159e0ec60
| | | | *DAT_Air_Vinyl.ScrollableTextView:0x159e10b20
| | | | | UIScrollView:0x15a828200
| | | | | | DAT_Air_Vinyl.MainLabel:0x159e10f80
| | | | | | DAT_Air_Vinyl.MainLabel:0x159d482b0
| | | | *UIButton:0x159d44cf0
| | | | | UIImageView:0x159d3ea10
| | | | *UIView:0x159d489a0
| | | | *UIView:0x159d48b80
| | *UITabBar:0x159d36b90
| | | _UIBarBackground:0x159d3ec60
| | | | UIImageView:0x159d3f390
| | | | UIVisualEffectView:0x159d3f5c0
| | | | | _UIVisualEffectBackdropView:0x159d47010
| | | | | _UIVisualEffectSubview:0x159d3c420
| | | | | _UIVisualEffectSubview:0x159d3da80
| | | UITabBarButton:0x159d404b0
| | | | UITabBarSwappableImageView:0x159d33330
| | | | UITabBarButtonLabel:0x159d407d0'Library'
| | | UITabBarButton:0x159d414f0
| | | | UITabBarSwappableImageView:0x159d41d10
| | | | UITabBarButtonLabel:0x159d419f0'Record'
| | | UITabBarButton:0x159d42840
| | | | UITabBarSwappableImageView:0x159d43060
| | | | UITabBarButtonLabel:0x159d42d40'Exports'
| | | UITabBarButton:0x159d43b90
| | | | UITabBarSwappableImageView:0x159d443b0
| | | | UITabBarButtonLabel:0x159d44090'Settings'
| +UITransitionView:0x15b1084e0- AMBIGUOUS LAYOUT for UITransitionView:0x15b1084e0.minX{id: 1503}, UITransitionView:0x15b1084e0.minY{id: 1480}, UITransitionView:0x15b1084e0.Width{id: 1608}, UITransitionView:0x15b1084e0.Height{id: 1481}
| | UIView:0x15b519280
| | *_UIKeyboardLayoutAlignmentView:0x15b518e50- AMBIGUOUS LAYOUT for _UIKeyboardLayoutAlignmentView:0x15b518e50.minY{id: 1478}
| | *_UIAlertControllerView:0x15d815e00- AMBIGUOUS LAYOUT for _UIAlertControllerView:0x15d815e00.minX{id: 1609}, _UIAlertControllerView:0x15d815e00.minY{id: 1610}, _UIAlertControllerView:0x15d815e00.Width{id: 1611}, _UIAlertControllerView:0x15d815e00.Height{id: 1612}
| | | *UIView:0x15b103200- AMBIGUOUS LAYOUT for UIView:0x15b103200.minX{id: 1582}, UIView:0x15b103200.minY{id: 1601}
| | | | *_UIAlertControllerInterfaceActionGroupView:0x15b510760- AMBIGUOUS LAYOUT for _UIAlertControllerInterfaceActionGroupView:0x15b510760.minX{id: 1522}
| | | | | *<_UIContentConstraintsLayoutGuide: 0x15b510c40 - "", layoutFrame = {{0, 0}, {39, 171.66666666666666}}, owningView = <_UIAlertControllerInterfaceActionGroupView: 0x15b510760; frame = (0 0; 0 0); opaque = NO; gestureRecognizers = <NSArray: 0x282061bf0>; layer = <CALayer: 0x282e8dde0>>>
| | | | | *UIView:0x15b510fb0
| | | | | | *_UIInterfaceActionGroupHeaderScrollView:0x15c035000
| | | | | | | *<_UIScrollViewContentOffsetGuide: 0x2817e6140 - "UIScrollView-contentOffsetLayoutGuide", layoutFrame = {{0, 0}, {0, 0}}, owningView = <_UIInterfaceActionGroupHeaderScrollView: 0x15c035000; frame = (0 0; 0 0); clipsToBounds = YES; gestureRecognizers = <NSArray: 0x282060ab0>; layer = <CALayer: 0x282ea68c0>; contentOffset: {0, 0}; contentSize: {0, 0}; adjustedContentInset: {0, 0, 0, 0}>>
| | | | | | | *UIView:0x15b515c10
| | | | | | | | *UIView:0x15b515df0- AMBIGUOUS LAYOUT for UIView:0x15b515df0.minX{id: 1580}, UIView:0x15b515df0.minY{id: 1812}, UIView:0x15b515df0.Height{id: 1813}
| | | | | | | | *UIView:0x15b515fd0- AMBIGUOUS LAYOUT for UIView:0x15b515fd0.minX{id: 1589}, UIView:0x15b515fd0.minY{id: 1814}, UIView:0x15b515fd0.Width{id: 1588}, UIView:0x15b515fd0.Height{id: 1815}
| | | | | | | | *UIView:0x15b5161b0- AMBIGUOUS LAYOUT for UIView:0x15b5161b0.minX{id: 1599}, UIView:0x15b5161b0.minY{id: 1816}
| | | | | | *groupView.actionsSequence...:0x15c032c00
| | | | | | | +actions-separatableSequen...:0x15b511550
| | | | | | | | •actions-separatableSequen...:0x15b511970
| | | | | | | | | *_UIInterfaceActionCustomViewRepresentationView:0x15b10cc30- AMBIGUOUS LAYOUT for _UIInterfaceActionCustomViewRepresentationView:0x15b10cc30.Height{id: 1650}
| | | | | | | | | | +_UIAlertControllerActionView:0x15b516e90
| | | | | | | | | | | *UIView:0x15b517370
| | | | | | | | | | | | *UILabel:0x15b517550'Show devices'
| | | | | | | | | *_UIInterfaceActionItemSeparatorView_iOS:0x15b60c530- AMBIGUOUS LAYOUT for _UIInterfaceActionItemSeparatorView_iOS:0x15b60c530.minY{id: 1725}
| | | | | | | | | | UIView:0x15b60c930
| | | | | | | | | | UIView:0x15b60cb10
| | | | | | | | | *_UIInterfaceActionCustomViewRepresentationView:0x15b60bda0- AMBIGUOUS LAYOUT for _UIInterfaceActionCustomViewRepresentationView:0x15b60bda0.minY{id: 1727}, _UIInterfaceActionCustomViewRepresentationView:0x15b60bda0.Height{id: 1700}
| | | | | | | | | | +_UIAlertControllerActionView:0x15b517840
| | | | | | | | | | | *UIView:0x15b517b20
| | | | | | | | | | | | *UILabel:0x15b517d00'Add device'
| | | | | | | | | *_UIInterfaceActionItemSeparatorView_iOS:0x15b60d3c0- AMBIGUOUS LAYOUT for _UIInterfaceActionItemSeparatorView_iOS:0x15b60d3c0.minY{id: 1729}
| | | | | | | | | | UIView:0x15b60d5c0
| | | | | | | | | | UIView:0x15b60d7a0
| | | | | | | | | *_UIInterfaceActionCustomViewRepresentationView:0x15b60bfe0- AMBIGUOUS LAYOUT for _UIInterfaceActionCustomViewRepresentationView:0x15b60bfe0.minY{id: 1723}, _UIInterfaceActionCustomViewRepresentationView:0x15b60bfe0.Height{id: 1653}
| | | | | | | | | | +_UIAlertControllerActionView:0x15b517ff0
| | | | | | | | | | | *UIView:0x15b5182d0
| | | | | | | | | | | | *UILabel:0x15b5184b0'Cancel'
| | | | | *_UIDimmingKnockoutBackdropView:0x15b512570
| | | | | | UIView:0x15b513340
| | | | | | UIVisualEffectView:0x15b512970
| | | | | | | _UIVisualEffectBackdropView:0x15b512f30
| | | | | | | _UIVisualEffectSubview:0x15b513140
The constraint issue only seems to happen on an iPhone X. iPads and the iPhone 7 and 7 Plus don't show the issue.
Thank you.
I got a similar issue on iPad.
I fixed it by removing sourceRect and changing view.frame to CGRect(x, y, 0, 0).
So, how about removing moreActionsCellSheetController.popoverPresentationController?.sourceRect in your code?
sourceRect determines the UIAlertController's popover position, so I think you'd be better off setting it explicitly as a CGRect.
Thanks.
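For context, on iPad an action-sheet-style UIAlertController is presented in a popover and needs a valid anchor (sourceView/sourceRect or barButtonItem); a stale or zero-size anchor is a common cause of the kind of ambiguous-layout dump shown above. A minimal sketch of explicit anchoring, where `presentingView` and the action titles are illustrative assumptions, not the original poster's code:

```swift
import UIKit

// Illustrative sketch: anchoring an action sheet's popover explicitly.
// `presentingView` is an assumed trigger view (e.g. the tapped button).
func presentSheet(from presentingView: UIView, in viewController: UIViewController) {
    let sheet = UIAlertController(title: nil, message: nil, preferredStyle: .actionSheet)
    sheet.addAction(UIAlertAction(title: "Show devices", style: .default))
    sheet.addAction(UIAlertAction(title: "Add device", style: .default))
    sheet.addAction(UIAlertAction(title: "Cancel", style: .cancel))

    if let popover = sheet.popoverPresentationController {
        // Anchor the popover to the view that triggered it; without a valid
        // sourceView/sourceRect (or barButtonItem), iPad presents from an
        // ambiguous position and can log unsatisfiable constraints.
        popover.sourceView = presentingView
        popover.sourceRect = presentingView.bounds
    }
    viewController.present(sheet, animated: true)
}
```

On iPhone the popover configuration is ignored and the sheet slides up from the bottom, so setting the anchor unconditionally is safe.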

MemSQL takes 15GB memory for 10MB of data

I have installed MemSQL 5.1.2 in the following manner, with the following resources:
Google cloud server
HDD: 100GB
Machine type: n1-standard-4 (4 vCPUs, 15 GB memory)
Implementation:
2 MemSQL nodes running on the same machine, on the following ports:
3306 Master Aggregator
3307 Leaf
Resource Utilization:
Memory 14.16 GB / 14.69 GB
Paging 0 B/s
Database size - 10MB
1818 memsql 1.1% 77% /var/lib/memsql/leaf-3307/memsqld --defaults-file=/var/lib/memsql/leaf-3307/memsql.cnf --pid-file=/var/lib/memsql/leaf-3307/data/memsqld.pid --user=memsql
2736 memsql 0.3% 16% /var/lib/memsql/master-3306/memsqld --defaults-file=/var/lib/memsql/master-330
Note: There is no Swap memory implemented in the server.
Database size is taken by running a query on information_schema.TABLES.
All data resides in row stores, since we have to run queries involving many relationships among tables.
As soon as MemSQL is up, memory usage goes up to 70% and keeps increasing; after 2-3 hours MemSQL gives the following error when we try to connect, and no connection can be made after that.
OperationalError: (1836, "Leaf 'xx.xxx.x.xx':3307 failed while executing this query. Try re-running the query.")
[Mon Mar 27 09:26:31.163455 2017] [:error] [pid 1718] [remote xxx.xxx.xxx.xxx:9956]
The only solution is to restart the server since it has taken up all the memory.
What can I do about this? Is there an issue in the way it's implemented? Should I attach any logs here?
SHOW STATUS EXTENDED; gives the following result:
+-------------------------------------+------------------------------------------------------------------------+
| Variable_name | Value |
+-------------------------------------+------------------------------------------------------------------------+
| Aborted_clients | 48 |
| Aborted_connects | 1 |
| Bytes_received | 85962135 |
| Bytes_sent | 545322701 |
| Connections | 1626 |
| Max_used_connections | 69 |
| Queries | 364793 |
| Questions | 364793 |
| Threads_cached | 19 |
| Threads_connected | 50 |
| Threads_created | 69 |
| Threads_running | 1 |
| Threads_background | 1 |
| Threads_idle | 0 |
| Ready_queue | 0 |
| Idle_queue | 0 |
| Context_switches | 1626 |
| Context_switch_misses | 0 |
| Uptime | 22270 |
| Auto_attach_remaining_seconds | 0 |
| Data_directory | /var/lib/memsql/leaf-3307/data |
| Plancache_directory | /var/lib/memsql/leaf-3307/plancache |
| Transaction_logs_directory | /var/lib/memsql/leaf-3307/data/logs |
| Segments_directory | /var/lib/memsql/leaf-3307/data/columns |
| Snapshots_directory | /var/lib/memsql/leaf-3307/data/snapshots |
| Threads_waiting_for_disk_space | 0 |
| Seconds_until_expiration | -1 |
| License_key | 11111111111111111111111111111111 |
| License_type | community |
| Query_compilations | 62 |
| Query_compilation_failures | 0 |
| GCed_versions_last_sweep | 0 |
| Average_garbage_collection_duration | 21 ms |
| Total_server_memory | 9791.4 MB |
| Alloc_thread_stacks | 70.0 MB |
| Malloc_active_memory | 1254.7 (+0.0) MB |
| Malloc_cumulative_memory | 7315.5 (+0.2) MB |
| Buffer_manager_memory | 1787.8 MB |
| Buffer_manager_cached_memory | 77.2 (-0.1) MB |
| Buffer_manager_unrecycled_memory | 0.0 MB |
| Alloc_skiplist_tower | 263.8 MB |
| Alloc_variable | 501.4 MB |
| Alloc_large_variable | 2.4 MB |
| Alloc_table_primary | 752.6 MB |
| Alloc_deleted_version | 92.9 MB |
| Alloc_internal_key_node | 72.1 MB |
| Alloc_hash_buckets | 459.1 MB |
| Alloc_table_metadata_cache | 1.1 MB |
| Alloc_unit_images | 34.8 MB |
| Alloc_unit_ifn_thunks | 0.6 MB |
| Alloc_object_code_images | 11.6 MB |
| Alloc_compiled_unit_sections | 17.3 MB |
| Alloc_databases_list_entry | 17.9 MB |
| Alloc_plan_cache | 0.1 MB |
| Alloc_replication_large | 232.0 MB |
| Alloc_durability_large | 7239.1 MB |
| Alloc_sharding_partitions | 0.1 MB |
| Alloc_security | 0.1 MB |
| Alloc_log_replay | 0.9 MB |
| Alloc_client_connection | 3.0 MB |
| Alloc_protocol_packet | 6.1 (+0.1) MB |
| Alloc_large_incremental | 0.8 MB |
| Alloc_table_memory | 2144.2 MB |
| Alloc_variable_bucket_16 | allocs:10877846 alloc_MB:166.0 buffer_MB:179.0 cached_buffer_MB:1.9 |
| Alloc_variable_bucket_24 | allocs:4275659 alloc_MB:97.9 buffer_MB:106.8 cached_buffer_MB:1.9 |
| Alloc_variable_bucket_32 | allocs:2875801 alloc_MB:87.8 buffer_MB:93.4 cached_buffer_MB:1.9 |
| Alloc_variable_bucket_40 | allocs:724489 alloc_MB:27.6 buffer_MB:31.0 cached_buffer_MB:1.2 |
| Alloc_variable_bucket_48 | allocs:377060 alloc_MB:17.3 buffer_MB:19.8 cached_buffer_MB:0.9 |
| Alloc_variable_bucket_56 | allocs:228720 alloc_MB:12.2 buffer_MB:14.0 cached_buffer_MB:0.8 |
| Alloc_variable_bucket_64 | allocs:150214 alloc_MB:9.2 buffer_MB:10.1 cached_buffer_MB:0.2 |
| Alloc_variable_bucket_72 | allocs:35264 alloc_MB:2.4 buffer_MB:2.9 cached_buffer_MB:0.0 |
| Alloc_variable_bucket_80 | allocs:14920 alloc_MB:1.1 buffer_MB:1.2 cached_buffer_MB:0.0 |
| Alloc_variable_bucket_88 | allocs:5582 alloc_MB:0.5 buffer_MB:0.6 cached_buffer_MB:0.0 |
| Alloc_variable_bucket_104 | allocs:8075 alloc_MB:0.8 buffer_MB:1.0 cached_buffer_MB:0.0 |
| Alloc_variable_bucket_128 | allocs:8892 alloc_MB:1.1 buffer_MB:1.2 cached_buffer_MB:0.0 |
| Alloc_variable_bucket_160 | allocs:17614 alloc_MB:2.7 buffer_MB:3.0 cached_buffer_MB:0.0 |
| Alloc_variable_bucket_200 | allocs:30454 alloc_MB:5.8 buffer_MB:6.9 cached_buffer_MB:0.6 |
| Alloc_variable_bucket_248 | allocs:4875 alloc_MB:1.2 buffer_MB:1.5 cached_buffer_MB:0.2 |
| Alloc_variable_bucket_312 | allocs:371 alloc_MB:0.1 buffer_MB:0.2 cached_buffer_MB:0.0 |
| Alloc_variable_bucket_384 | allocs:30 alloc_MB:0.0 buffer_MB:0.1 cached_buffer_MB:0.0 |
| Alloc_variable_bucket_480 | allocs:11 alloc_MB:0.0 buffer_MB:0.1 cached_buffer_MB:0.0 |
| Alloc_variable_bucket_600 | allocs:57 alloc_MB:0.0 buffer_MB:0.2 cached_buffer_MB:0.0 |
| Alloc_variable_bucket_752 | allocs:62 alloc_MB:0.0 buffer_MB:0.2 cached_buffer_MB:0.0 |
| Alloc_variable_bucket_936 | allocs:42 alloc_MB:0.0 buffer_MB:0.2 cached_buffer_MB:0.0 |
| Alloc_variable_bucket_1168 | allocs:106 alloc_MB:0.1 buffer_MB:0.2 cached_buffer_MB:0.0 |
| Alloc_variable_bucket_1480 | allocs:126 alloc_MB:0.2 buffer_MB:0.2 cached_buffer_MB:0.0 |
| Alloc_variable_bucket_1832 | allocs:0 alloc_MB:0.0 buffer_MB:0.2 cached_buffer_MB:0.2 |
| Alloc_variable_bucket_2288 | allocs:1 alloc_MB:0.0 buffer_MB:0.2 cached_buffer_MB:0.1 |
| Alloc_variable_bucket_2832 | allocs:33 alloc_MB:0.1 buffer_MB:1.1 cached_buffer_MB:0.2 |
| Alloc_variable_bucket_3528 | allocs:16 alloc_MB:0.1 buffer_MB:0.5 cached_buffer_MB:0.1 |
| Alloc_variable_bucket_4504 | allocs:49 alloc_MB:0.2 buffer_MB:0.8 cached_buffer_MB:0.0 |
| Alloc_variable_bucket_5680 | allocs:66 alloc_MB:0.4 buffer_MB:1.2 cached_buffer_MB:0.0 |
| Alloc_variable_bucket_6224 | allocs:30 alloc_MB:0.2 buffer_MB:1.0 cached_buffer_MB:0.1 |
| Alloc_variable_bucket_7264 | allocs:94 alloc_MB:0.7 buffer_MB:1.5 cached_buffer_MB:0.0 |
| Alloc_variable_bucket_9344 | allocs:70 alloc_MB:0.6 buffer_MB:2.6 cached_buffer_MB:0.2 |
| Alloc_variable_bucket_11896 | allocs:14 alloc_MB:0.2 buffer_MB:2.4 cached_buffer_MB:1.2 |
| Alloc_variable_bucket_14544 | allocs:7 alloc_MB:0.1 buffer_MB:2.4 cached_buffer_MB:1.9 |
| Alloc_variable_bucket_18696 | allocs:18 alloc_MB:0.3 buffer_MB:3.2 cached_buffer_MB:1.9 |
| Alloc_variable_bucket_21816 | allocs:4 alloc_MB:0.1 buffer_MB:0.4 cached_buffer_MB:0.0 |
| Alloc_variable_bucket_26184 | allocs:6 alloc_MB:0.1 buffer_MB:0.9 cached_buffer_MB:0.2 |
| Alloc_variable_bucket_32728 | allocs:13 alloc_MB:0.4 buffer_MB:2.4 cached_buffer_MB:1.4 |
| Alloc_variable_bucket_43648 | allocs:12 alloc_MB:0.5 buffer_MB:1.4 cached_buffer_MB:0.2 |
| Alloc_variable_bucket_65472 | allocs:7 alloc_MB:0.4 buffer_MB:2.8 cached_buffer_MB:1.9 |
| Alloc_variable_bucket_130960 | allocs:3 alloc_MB:0.4 buffer_MB:2.2 cached_buffer_MB:1.9 |
| Alloc_variable_cached_buffers | 21.4 MB |
| Alloc_variable_allocated | 438.7 MB |
| Successful_read_queries | 9048 |
| Successful_write_queries | 19096 |
| Failed_read_queries | 0 |
| Failed_write_queries | 4 |
| Rows_returned_by_reads | 75939 |
| Rows_affected_by_writes | 245 |
| Execution_time_of_reads | 7864 ms |
| Execution_time_of_write | 180311 ms |
| Transaction_buffer_wait_time | 0 ms |
| Transaction_log_flush_wait_time | 0 ms |
| Row_lock_wait_time | 0 ms |
| Ssl_accept_renegotiates | 0 |
| Ssl_accepts | 0 |
| Ssl_callback_cache_hits | 0 |
| Ssl_client_connects | 0 |
| Ssl_connect_renegotiates | 0 |
| Ssl_ctx_verify_depth | 18446744073709551615 |
| Ssl_ctx_verify_mode | 0 |
| Ssl_default_timeout | 0 |
| Ssl_finished_accepts | 0 |
| Ssl_finished_connects | 0 |
| Ssl_session_cache_hits | 0 |
| Ssl_session_cache_misses | 0 |
| Ssl_session_cache_overflows | 0 |
| Ssl_session_cache_size | 20480 |
| Ssl_session_cache_timeouts | 0 |
| Ssl_sessions_reused | 0 |
| Ssl_used_session_cache_entries | 0 |
| Ssl_verify_depth | 0 |
| Ssl_verify_mode | 0 |
| Ssl_cipher | |
| Ssl_cipher_list | |
| Ssl_version | |
| Ssl_session_cache_mode | SERVER |
+-------------------------------------+------------------------------------------------------------------------+
From the status output, we can see:
10GB total memory on the leaf node
7GB Alloc_durability_large
You can see what these variables mean here: https://help.memsql.com/hc/en-us/articles/115001091386-What-Is-Using-Memory-on-My-Leaves-
Most interesting is the large amount in Alloc_durability_large, which is unusual. Do you have a large number of databases and/or partitions? (You can check by counting the number of rows in SHOW DATABASES EXTENDED on the leaf node.) Each partition requires a fixed amount of transaction buffer memory (the default is 64 MB), so 7239 MB of Alloc_durability_large corresponds to roughly 113 partitions' worth of buffers.

Core Data Group By fetch all properties issue

I have a single table
DB_SMS
----------------------------------------
contactId | messageText | to | createdAt
I want to fetch the most recently created record for each contactId. For example:
contactId | messageText | to | createdAt
1 | msg01 | a | 2015-04-20
1 | msg02 | b | 2015-04-21
2 | msg03 | c | 2015-04-20
3 | msg04 | d | 2015-04-20
3 | msg05 | w | 2015-04-22
Required result
contactId | messageText | to | createdAt
1 | msg02 | b | 2015-04-21
2 | msg03 | c | 2015-04-20
3 | msg05 | w | 2015-04-22
I am using the following code:
NSFetchRequest *theRequest = [NSFetchRequest fetchRequestWithEntityName:@"DB_SMS"];
[theRequest setResultType:NSDictionaryResultType];
[theRequest setPropertiesToFetch:@[@"contactId", @"messageText", @"to", @"createdAt"]];
[theRequest setPropertiesToGroupBy:@[@"contactId"]];
[theRequest setSortDescriptors:@[[NSSortDescriptor sortDescriptorWithKey:@"createdAt" ascending:NO]]];
NSArray *theArray = [theDbHandler.managedObjectContext executeFetchRequest:theRequest error:nil];
I'm getting the following exception:
SELECT clauses in queries with GROUP BY components can only contain properties named in the GROUP BY or aggregate functions
Please suggest a solution.
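The exception is Core Data enforcing exactly what it states: with propertiesToGroupBy set, propertiesToFetch may only contain the grouped properties or aggregate expressions, so messageText and to cannot be fetched in the same grouped request. A common workaround is a two-pass fetch: group by contactId to get max(createdAt), then fetch the full row for each pair. A minimal Swift sketch, where `context` is an assumed NSManagedObjectContext and only the entity and attribute names come from the question:

```swift
import CoreData

// Illustrative two-pass fetch; `context` is an assumed NSManagedObjectContext.
func latestMessagePerContact(in context: NSManagedObjectContext) throws -> [NSManagedObject] {
    // Pass 1: group by contactId, fetching only the aggregate max(createdAt).
    let maxDate = NSExpressionDescription()
    maxDate.name = "latestCreatedAt"
    maxDate.expression = NSExpression(forFunction: "max:",
                                      arguments: [NSExpression(forKeyPath: "createdAt")])
    maxDate.expressionResultType = .dateAttributeType

    let groupRequest = NSFetchRequest<NSDictionary>(entityName: "DB_SMS")
    groupRequest.resultType = .dictionaryResultType
    groupRequest.propertiesToFetch = ["contactId", maxDate]   // grouped key + aggregate only
    groupRequest.propertiesToGroupBy = ["contactId"]

    // Pass 2: fetch the full row for each (contactId, latestCreatedAt) pair.
    var results: [NSManagedObject] = []
    for group in try context.fetch(groupRequest) {
        guard let contactId = group["contactId"],
              let date = group["latestCreatedAt"] else { continue }
        let rowRequest = NSFetchRequest<NSManagedObject>(entityName: "DB_SMS")
        rowRequest.predicate = NSPredicate(format: "contactId == %@ AND createdAt == %@",
                                           argumentArray: [contactId, date])
        rowRequest.fetchLimit = 1
        results += try context.fetch(rowRequest)
    }
    return results
}
```

For small tables it can be simpler to fetch everything sorted by createdAt descending and keep the first row seen per contactId in memory.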

Closing a xxforms:dialog which contains a xbl datatable generates an unknown error

Using 4.0.0 Beta 3.
We have a dialog using xxforms:dialog which is opened with xxforms:show and closed with xxforms:hide (either from an action button or the X (close) button). The common thread is that if the dialog contains an xbl datatable, an unknown error is generated (see below).
Sometimes the dialog hides completely with no client-side error; other times the page does not re-render because a client-side error occurs ( Message: 'undefined' is not an object (evaluating 'tooltips[control.id].cfg.getProperty') )
Any thoughts?
+----------------------------------------------------------------------------------------------------------------------+
|An Error has Occurred |
|----------------------------------------------------------------------------------------------------------------------|
|[No error message provided.] |
|----------------------------------------------------------------------------------------------------------------------|
|Application Call Stack |
|----------------------------------------------------------------------------------------------------------------------|
|----------------------------------------------------------------------------------------------------------------------|
|Exception: java.lang.IllegalStateException |
|----------------------------------------------------------------------------------------------------------------------|
|org.orbeon.oxf.xforms.BindingContext$$anonfun$ances|apply |BindingContext.scala | 71|
|org.orbeon.oxf.xforms.BindingContext$$anonfun$ances|apply |BindingContext.scala | 71|
|scala.Option |getOrElse |Option.scala | 108|
|org.orbeon.oxf.xforms.BindingContext |ancestorOrSelfInScope$1 |BindingContext.scala | 71|
|org.orbeon.oxf.xforms.BindingContext |pushVariable |BindingContext.scala | 74|
|org.orbeon.oxf.xforms.XFormsContextStack |scopeVariable |XFormsContextStack.java | 224|
|org.orbeon.oxf.xforms.action.actions.XFormsActionAc|apply |XFormsActionAction.scala | 48|
|org.orbeon.oxf.xforms.action.actions.XFormsActionAc|apply |XFormsActionAction.scala | 41|
|scala.collection.Iterator$class |foreach |Iterator.scala | 772|
|scala.collection.JavaConversions$JIteratorWrapper |foreach |JavaConversions.scala | 573|
|scala.collection.IterableLike$class |foreach |IterableLike.scala | 73|
|scala.collection.JavaConversions$JListWrapper |foreach |JavaConversions.scala | 615|
|org.orbeon.oxf.xforms.action.actions.XFormsActionAc|execute |XFormsActionAction.scala | 41|
|org.orbeon.oxf.xforms.action.XFormsActionInterprete|runSingleIteration |XFormsActionInterpreter.java | 204|
|org.orbeon.oxf.xforms.action.XFormsActionInterprete|runAction |XFormsActionInterpreter.java | 150|
|----------------------------------------------------------------------------------------------------------------------|
|Exception: org.orbeon.oxf.common.ValidationException |
|----------------------------------------------------------------------------------------------------------------------|
|org.orbeon.oxf.common.ValidationException |wrapException |ValidationException.java | 126|
|org.orbeon.oxf.xforms.action.XFormsActionInterprete|runAction |XFormsActionInterpreter.java | 157|
|org.orbeon.oxf.xforms.event.EventHandlerImpl$$anonf|apply$mcV$sp |EventHandlerImpl.scala | 249|
|org.orbeon.oxf.xforms.event.EventHandlerImpl$$anonf|apply |EventHandlerImpl.scala | 249|
|org.orbeon.oxf.xforms.event.EventHandlerImpl$$anonf|apply |EventHandlerImpl.scala | 249|
|org.orbeon.oxf.util.DynamicVariable |withValue |DynamicVariable.scala | 40|
|org.orbeon.oxf.xforms.action.XFormsAPI$ |withScalaAction |XFormsAPI.scala | 39|
|org.orbeon.oxf.xforms.event.EventHandlerImpl |handleEvent |EventHandlerImpl.scala | 248|
|org.orbeon.oxf.xforms.event.Dispatch$$anonfun$dispa|apply$mcV$sp |Dispatch.scala | 79|
|org.orbeon.oxf.xforms.event.Dispatch$$anonfun$dispa|apply |Dispatch.scala | 78|
|org.orbeon.oxf.xforms.event.Dispatch$$anonfun$dispa|apply |Dispatch.scala | 78|
|org.orbeon.oxf.util.Logging$class |withDebug |Logging.scala | 43|
|org.orbeon.oxf.xforms.event.Dispatch$ |withDebug |Dispatch.scala | 22|
|org.orbeon.oxf.xforms.event.Dispatch$$anonfun$dispa|apply |Dispatch.scala | 78|
|org.orbeon.oxf.xforms.event.Dispatch$$anonfun$dispa|apply |Dispatch.scala | 72|
|scala.collection.TraversableLike$WithFilter$$anonfu|apply |TraversableLike.scala | 697|
|scala.collection.LinearSeqOptimized$class |foreach |LinearSeqOptimized.scala | 59|
|scala.collection.immutable.List |foreach |List.scala | 76|
|scala.collection.TraversableLike$WithFilter |map |TraversableLike.scala | 696|
|org.orbeon.oxf.xforms.event.Dispatch$$anonfun$dispa|apply |Dispatch.scala | 72|
|---8<--------8<--------8<--------8<--------8<--------8<--------8<--------8<--------8<--------8<--------8<--------8<---|
|org.orbeon.oxf.util.ScalaUtils$ |withRootException |ScalaUtils.scala | 116|
|org.orbeon.oxf.servlet.OrbeonServlet |service |OrbeonServlet.scala | 67|
|javax.servlet.http.HttpServlet |service |HttpServlet.java | 722|
|org.apache.catalina.core.ApplicationFilterChain |internalDoFilter |ApplicationFilterChain.java | 305|
|org.apache.catalina.core.ApplicationFilterChain |doFilter |ApplicationFilterChain.java | 210|
|org.apache.catalina.core.StandardWrapperValve |invoke |StandardWrapperValve.java | 225|
|org.apache.catalina.core.StandardContextValve |invoke |StandardContextValve.java | 169|
|org.apache.catalina.authenticator.AuthenticatorBase|invoke |AuthenticatorBase.java | 472|
|org.apache.catalina.core.StandardHostValve |invoke |StandardHostValve.java | 168|
|org.apache.catalina.valves.ErrorReportValve |invoke |ErrorReportValve.java | 98|
|org.apache.catalina.valves.AccessLogValve |invoke |AccessLogValve.java | 927|
|org.apache.catalina.valves.RemoteIpValve |invoke |RemoteIpValve.java | 680|
|org.apache.catalina.core.StandardEngineValve |invoke |StandardEngineValve.java | 118|
|org.apache.catalina.connector.CoyoteAdapter |service |CoyoteAdapter.java | 407|
|org.apache.coyote.http11.AbstractHttp11Processor |process |AbstractHttp11Processor.java | 999|
|org.apache.coyote.AbstractProtocol$AbstractConnecti|process |AbstractProtocol.java | 565|
|org.apache.tomcat.util.net.JIoEndpoint$SocketProces|run |JIoEndpoint.java | 309|
|java.util.concurrent.ThreadPoolExecutor |runWorker |ThreadPoolExecutor.java |1110|
|java.util.concurrent.ThreadPoolExecutor$Worker |run |ThreadPoolExecutor.java | 603|
|java.lang.Thread |run |Thread.java | 722|
+----------------------------------------------------------------------------------------------------------------------+
This indeed looks like a bug. I created an issue for this on GitHub, and you can watch that thread if you'd like to get updates on this.

Resources