[BUG] Current defaults for disk and memory are not enough for Openshift #4266

Open
izderadicka opened this issue Jul 10, 2024 · 4 comments
Labels: kind/question, resolution/invalid, status/need more information

izderadicka commented Jul 10, 2024

General information

  • OS: Windows
  • Hypervisor: Hyper-V
  • Did you run crc setup before starting it (Yes/No)? Yes
  • Running CRC on: Laptop

CRC version

CRC version: 2.38.0+25b6eb
OpenShift version: 4.15.17

Host Operating System

OS Name:                   Microsoft Windows 11 Enterprise
OS Version:                10.0.22631 N/A Build 22631
OS Manufacturer:           Microsoft Corporation
OS Configuration:          Standalone Workstation
System Type:               x64-based PC
Processor(s):              1 Processor(s) Installed.
                           [01]: AMD64 Family 25 Model 68 Stepping 1 AuthenticAMD ~2701 Mhz
Total Physical Memory:     32,020 MB

Steps to reproduce

After starting OpenShift with the default configuration of memory (10GiB) and disk (31GiB), I had problems working with the cluster:

  1. Adding additional pods failed - the nginx and Quarkus examples were Evicted because the node ran out of memory.
  2. After a couple of restarts of OpenShift there were about 800 pods Evicted due to disk pressure - the cluster was accessible, but a lot of services did not start, like the image registry (see the commands below for how this can be inspected).
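
For reference, this is roughly how the evictions and the node pressure can be inspected with standard oc commands (shown only as an illustration; the exact output will vary):

    # List failed/evicted pods across all namespaces
    oc get pods --all-namespaces --field-selector=status.phase=Failed

    # Check node conditions for MemoryPressure / DiskPressure
    oc describe nodes | grep -A 7 "Conditions:"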

Expected

The default configuration should make it possible to play around with OpenShift - at least provide space for some new pods - and it should be stable, not break down from lack of resources after a simple experiment (the nginx and Quarkus examples) and a couple of restarts.

Actual

I think the default resource configuration is just on the edge of being able to run the cluster itself - at least that is my impression. I did not do anything fancy, just tried a couple of examples, and that caused problems.

@izderadicka izderadicka added the kind/bug and status/need triage labels on Jul 10, 2024
gbraad (Contributor) commented Jul 11, 2024

The defaults have always been chosen to get the cluster itself operational.

Adding additional pods failed - the nginx and Quarkus examples were Evicted because the node ran out of memory.

In that case, you have to add memory. With newer versions this might become more necessary (previously you may have been able to get away with just the defaults), though it is always necessary to adjust the memory and CPU resources according to the payload you are running. This has always been the case...
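
As a rough sketch (the values are illustrative, not recommendations), the memory and CPU allocation can be raised through crc config so they apply on the next start:

    # Memory is given in MiB, cpus is a core count
    crc config set memory 16384
    crc config set cpus 6

    # The new values take effect when the instance is restarted
    crc stop
    crc start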

@gbraad gbraad added the resolution/invalid, kind/question, and status/need more information labels and removed the kind/bug and status/need triage labels on Jul 11, 2024
gbraad (Contributor) commented Jul 11, 2024

nginx or quarkus example

Can you indicate which Quarkus example you used? I would not expect nginx or httpd to cause these issues, as their footprint is very minimal.

izderadicka (Author) commented

For nginx I used the example based on a Template; for Quarkus I used the Basic Quarkus sample from Devfiles.

I started with Basic Quarkus - it did not deploy; I think it was due to memory or maybe an image problem. I then deleted it and tried nginx - again a problem with memory. I increased memory by 2G, directly on the VM.

Later, after restarts, the whole cluster went bad, with many pods not starting due to disk pressure on the node.

I started a completely new instance with 13G memory and a 50G disk - it looked better, but I did not play with it much.

This issue is just an FYI - my impression was that the default resources are at their limit, though maybe it is only a local problem. I wanted to save other new users some trouble. Feel free to close at your discretion - setting these parameters is easy, so I know what to do.
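
For anyone hitting the same symptoms, the values mentioned above can also be set up front with crc config before creating a fresh instance (a sketch only; availability of the disk-size option may depend on the crc release):

    # Roughly the configuration mentioned above: 13G memory, 50G disk
    crc config set memory 13312
    crc config set disk-size 50

    # Start a clean instance with the larger resources
    crc delete
    crc start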

gbraad (Contributor) commented Jul 12, 2024

I wanted to save other new users some trouble.

We are trying to improve this situation, but this will mostly be around "messaging": we will have to report to the user that there is pressure on the cluster and that resources need to be increased. I agree with your point and would like to determine what can be done to make this more obvious. We might increase the defaults a little, though this would only delay the actual problem.

It is mostly the memory; on our machines with 32G we usually assign 16-18G of memory. On Windows/macOS people are more likely to also use the machine for other activities, like browsing with many tabs left open, which introduces memory pressure on lower-spec machines.
