5xx errors in load generator #1800
@agardnerIT are all pods running?
I've turned some off (OpenSearch, Jaeger, Grafana, and the accounting service, because it keeps crashing), but other than that, yes, they are.
I deployed this today.
Sorry @agardnerIT, this question was for @puckpuck 😅. Locally, we have one issue with
Side note: it would be good to document which services can be disabled and what effect (if any) disabling them has on the core demo / use cases. For example, I've turned off my accounting service, but I have no idea whether that matters to the "core system" or not. Perhaps a new column / table on this page, like:
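A sketch of what such a table could look like (the rows and effects below are illustrative guesses based on each service's role in the demo, not verified behavior):

```markdown
| Service    | Needed for core demo? | Effect if disabled                  |
| ---------- | --------------------- | ----------------------------------- |
| grafana    | No (visualization)    | Dashboards unavailable              |
| jaeger     | No (visualization)    | Trace UI unavailable                |
| opensearch | No (log backend)      | Log exploration unavailable         |
| accounting | ?                     | ? (unclear; hence this suggestion)  |
```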
I (probably) have the same problem, but with local Docker rather than Kubernetes. When I open http://localhost:8080 I get the 5xx error, while http://localhost:8080/jaeger/ui and http://localhost:8080/feature/ both work fine. In addition to the standard setup, I have https://github.com/cbos/observability-toolkit coupled with the demo setup.
It tries to connect to a strange IP address: 52.140.110.196. This is what I have in the Envoy proxy (http://localhost:10000/clusters):
Some of the IP addresses resolve to remote/public addresses rather than local ones.
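One way to confirm this symptom is to check, from inside the environment, whether the demo's service names resolve to private (cluster-local) addresses or to public ones. A minimal stdlib-only sketch (the service names in the commented call are assumptions; substitute your own):

```python
import ipaddress
import socket

def is_private(ip: str) -> bool:
    """True if the address is in a private (RFC 1918) or loopback range."""
    return ipaddress.ip_address(ip).is_private

def check_resolution(hostnames):
    """Resolve each hostname and flag any that point to public addresses."""
    for host in hostnames:
        try:
            infos = socket.getaddrinfo(host, None)
        except socket.gaierror as exc:
            print(f"{host}: resolution failed ({exc})")
            continue
        for ip in sorted({info[4][0] for info in infos}):
            status = "local/private" if is_private(ip) else "PUBLIC - suspicious"
            print(f"{host}: {ip} ({status})")

# Example service names from the demo; adjust to your deployment:
# check_resolution(["frontend", "flagd", "jaeger"])
```

The public 52.140.110.196 address quoted above would be flagged, while cluster or Docker-network addresses (10.x, 172.16-31.x, 192.168.x) pass.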
#1768 changed this version from 1.30 to 1.32. cc: @puckpuck. I don't know all the details, but at least this fixed my problem; my frontend (and other services) now resolve to local addresses:
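For Docker Compose users, the equivalent workaround is to run the newer Envoy image for the frontend proxy. A minimal sketch, assuming a standard compose setup (the service name and the override-file approach are assumptions; check your own compose file for the actual keys):

```yaml
# docker-compose.override.yml (illustrative)
services:
  frontend-proxy:
    image: envoyproxy/envoy:v1.32.0   # previously a 1.30.x tag
```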
@agardnerIT and @cbos, what distro of k8s are you running this in? I tried it out today with a local
I can confirm I still see this issue. It takes a few minutes to start happening, but once it does, approximately every 2 minutes and 15 seconds I see 155 new entries in the log with the following output:
This statement is not written to the OTLP logs emitted by the load generator; I only see it in the service's console output. I can replicate this in both local Docker Compose and Kubernetes. Although I can see these errors in the console logs, I can't spot anything in the load generator that would indicate an actual issue.
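To confirm a cadence like "155 entries roughly every 2 minutes 15 seconds", it can help to group the timestamps of the offending log lines into bursts. A small sketch (the timestamps in the example are synthetic; the 30-second gap threshold is an arbitrary assumption):

```python
from datetime import datetime, timedelta

def group_bursts(timestamps, gap=timedelta(seconds=30)):
    """Group datetimes into bursts: entries closer than `gap` share a burst."""
    bursts = []
    for ts in sorted(timestamps):
        if bursts and ts - bursts[-1][-1] < gap:
            bursts[-1].append(ts)
        else:
            bursts.append([ts])
    return bursts

# Synthetic example: two bursts about 135 s apart, matching the reported cadence.
t0 = datetime(2025, 1, 1, 12, 0, 0)
ts = [t0 + timedelta(seconds=s) for s in (0, 1, 2, 135, 136)]
print([len(b) for b in group_bursts(ts)])
```

Feeding it the real timestamps extracted from the container logs would show whether the bursts are evenly spaced, which points at a periodic load-generator task rather than random failures.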
Hi, I have the same issue on GKE, but I created my Elastic OTEL demo (2.0.1) in a different namespace than default.
My pods are as follows:
My services are as follows:
I did some investigation on this. The log entries are coming from the Playwright browser: commenting out the lines of code that emit the browser's console output stops the messages from appearing. This is the code on lines 191 and 203 of locustfile.py: page.on("console", lambda msg: print(msg.text)). I don't know why the error is being reported, but this is where it is being reported from.
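If the goal is merely to silence the noise rather than fix the root cause, the bare lambda quoted above could be replaced by a filtering handler. A hedged sketch, not the demo's actual code (the patterns in NOISY_PATTERNS are illustrative guesses at the 5xx messages reported in this thread):

```python
import re

# Illustrative patterns for the noisy browser-console messages; extend as needed.
NOISY_PATTERNS = [
    re.compile(r"Failed to load resource.*5\d\d"),
    re.compile(r"the server responded with a status of 5\d\d"),
]

def should_log(msg_text: str) -> bool:
    """Return False for console messages matching a known-noisy pattern."""
    return not any(p.search(msg_text) for p in NOISY_PATTERNS)

def console_handler(msg):
    # Playwright passes a ConsoleMessage; msg.text holds its text content.
    if should_log(msg.text):
        print(msg.text)

# In locustfile.py this would replace the bare lambda:
# page.on("console", console_handler)
```

Note that filtering only hides the symptom; as the next comment points out, the missing downstream transactions suggest a real failure behind these messages.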
Actually, it's more than just an error message, because downstream we see no transactions.
I can continue to see traces in Jaeger for the
Bug Report
Symptom
Lots and lots of 5xx errors.
What is the expected behavior?
No 5xx errors
What do you expect to see?
What is the actual behavior?
Please describe the actual behavior experienced.
Reproduce
Could you provide the minimum required steps to reproduce the issue you're seeing?
We will close this issue if:
Additional Context
curl from on-cluster pod
It works:
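The on-cluster curl check above can also be scripted, which makes it easy to re-run from a debug pod and classify the responses. A stdlib-only sketch (the frontend URL in the commented call is an assumption; use whatever service address your deployment exposes):

```python
import urllib.error
import urllib.request

def classify(status: int) -> str:
    """Bucket an HTTP status code: success, client error, or server error."""
    if 200 <= status < 300:
        return "success"
    if 400 <= status < 500:
        return "client error"
    if 500 <= status < 600:
        return "server error (5xx)"
    return "other"

def probe(url: str, timeout: float = 5.0) -> str:
    """Fetch `url` and report its status bucket (urlopen raises on >= 400)."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return classify(resp.status)
    except urllib.error.HTTPError as exc:
        return classify(exc.code)

# Example, run from a pod inside the cluster (hostname is an assumption):
# print(probe("http://frontend:8080/"))
```

Comparing the result from inside the cluster (works, per this report) with the result through the Envoy frontend-proxy helps narrow the failure down to the proxy / DNS-resolution layer discussed earlier in the thread.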