work_mem value can be bad #38
Comments
Yikes, that's no good. Did you set any other flags I should know about? Will definitely look to push a fix ASAP.
I ask because when I try

timescaledb-tune --memory=32GB --cpus=32

I get this for memory settings

Recommendations based on 120.00 GB of available memory and 32 CPUs for PostgreSQL 11
shared_buffers = 30GB
effective_cache_size = 90GB
maintenance_work_mem = 2047MB
work_mem = 19660kB

So I wonder how it ended with 64KB. EDIT: Fixed memory flag
No flags were added, and it gave me a value of 41kB, so it would not run.
Happy to correct the issue with it returning invalid values, but I am a bit worried that it is misreading your settings, since it should not be giving 41kB for the given parameters (120GB RAM, 32 CPUs). It would be useful if you could run the following commands from inside the container:

cat /sys/fs/cgroup/memory/memory.limit_in_bytes

and

free -m | grep 'Mem' | awk '{print $2}'

Thanks for the bug report.
Here are the results of the commands you sent. Although this machine has a lot of resources, GKE probably slices it differently, which produces those results.
bash-4.4$ cat /sys/fs/cgroup/memory/memory.limit_in_bytes
268435456
bash-4.4$ free -m | grep 'Mem' | awk '{print $2}'
120873
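To make the implication of those numbers concrete: 268435456 bytes is a 256MB cgroup limit, so a container-aware tool that respects cgroups will budget against 256MB even though free reports roughly 120GB on the node. The sketch below is an assumption about what that detection logic generally looks like, not timescaledb-tune's actual code; the file path is the cgroup v1 location shown above, and the min-of-limit-and-MemTotal rule is illustrative.

```go
// memdetect.go - a minimal sketch (not timescaledb-tune's actual implementation)
// of how a container-aware tool might pick the memory figure it tunes against:
// take the smaller of the cgroup v1 memory limit and the host's MemTotal. With
// the 268435456-byte (256MB) cgroup limit shown above, the effective budget is
// 256MB even though the node reports ~120GB.
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

const cgroupV1MemLimit = "/sys/fs/cgroup/memory/memory.limit_in_bytes"

// cgroupLimitBytes returns the cgroup v1 memory limit, or 0 if it is absent
// or effectively unlimited (some kernels report a huge sentinel value).
func cgroupLimitBytes() uint64 {
	data, err := os.ReadFile(cgroupV1MemLimit)
	if err != nil {
		return 0
	}
	limit, err := strconv.ParseUint(strings.TrimSpace(string(data)), 10, 64)
	if err != nil || limit > 1<<50 { // treat absurdly large values as "no limit"
		return 0
	}
	return limit
}

// memTotalBytes parses MemTotal from /proc/meminfo (reported in kB).
func memTotalBytes() uint64 {
	data, err := os.ReadFile("/proc/meminfo")
	if err != nil {
		return 0
	}
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasPrefix(line, "MemTotal:") {
			fields := strings.Fields(line)
			kb, _ := strconv.ParseUint(fields[1], 10, 64)
			return kb * 1024
		}
	}
	return 0
}

func main() {
	total := memTotalBytes()
	if limit := cgroupLimitBytes(); limit > 0 && limit < total {
		total = limit // the container's limit, not the node's RAM, is what matters
	}
	fmt.Printf("effective memory budget: %d bytes (%.0f MB)\n",
		total, float64(total)/(1024*1024))
}
```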
This got closed by the merge, but there seems to be another problem at play here. Specifically,
bump @roger-rainey
The memory number was coming from the Kubernetes request memory, which is not the memory limit.
@roger-rainey That's intriguing. The cgroups Would you mind posting your k8s configuration for this pod, and any other information you have about resource utilization on the node your pod is on? If it seems like there's still more than
One other useful bit of info might be the output of
@roger-rainey Did you have any follow-up on this re: @LeeHampton's comment?
The code that calculates work_mem can produce a value that is below the minimum threshold of 64kB. This will prevent TimescaleDB/Postgres from starting. I have experienced this issue deploying to a k8s node with 32 cores and 120GB RAM. The output from timescaledb-tune was 41kB, which causes Postgres to fail to start.
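For reference, the usual guard against this failure mode is to clamp whatever the formula produces to PostgreSQL's 64kB floor for work_mem. The sketch below only illustrates that clamp under assumed names; recommendWorkMemKB and its divisor are hypothetical, and this is not the code merged in the referenced fix.

```go
// A minimal sketch, not the project's actual fix: however work_mem is derived
// from total memory, connection count, etc., the result should be clamped to
// PostgreSQL's hard minimum of 64kB so the generated config can never prevent
// the server from starting.
package main

import "fmt"

const minWorkMemKB = 64 // PostgreSQL rejects work_mem below 64kB

// recommendWorkMemKB stands in for whatever formula the tuner uses; the
// formula itself is hypothetical, the point is the final clamp.
func recommendWorkMemKB(totalMemKB, maxConns uint64) uint64 {
	workMem := totalMemKB / (maxConns * 100) // illustrative divisor only
	if workMem < minWorkMemKB {
		workMem = minWorkMemKB
	}
	return workMem
}

func main() {
	// A 256MB memory budget (the cgroup limit shown earlier) with 100 connections
	// would land below 64kB without the clamp.
	fmt.Printf("work_mem = %dkB\n", recommendWorkMemKB(256*1024, 100))
}
```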