Rate-limiter receiving internal IP of proxy container #263
Hi @sre3, thanks for your interest in my repo! As I understand the issue (please correct me if I'm wrong), we're running Cadence as a service behind a reverse proxy provided by Caddy. I just replicated the issue on my own deployment with the Cadence-provided nginx reverse proxy: it's logging the proxy's IP regardless of the client.

Your setup is niche, but I think it's revealed a problem that has been flying under my radar for a long time. It must have been present in my deployment since I introduced the reverse proxy component, and I think I missed it because most Cadence users probably run at a small enough scale that it isn't noticeable.

Admittedly, Cadence's current rate limiter is a quick in-house invention which does a basic check through Go's […]. (Of course, on a six-year-old version of Cadence written in Python, @za419 seems to have already done it; see lines 602 to 609 in 9709d9a.)
If I re-implement something like this into the current version, there is a security consideration, as selecting a client address from a forwarded header means trusting a value that clients can spoof. Thanks for raising this issue.
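For what it's worth, here is roughly what that could look like in Go. This is a minimal sketch of the trusted-proxy idea, not Cadence's actual code; the `trustedProxies` list and `clientIP` helper are names made up for illustration:

```go
package main

import (
	"net"
	"net/http"
	"strings"
)

// trustedProxies is a hypothetical allowlist of reverse-proxy addresses.
// Only requests arriving from these peers have X-Forwarded-For honored.
var trustedProxies = map[string]bool{
	"10.89.0.7": true, // e.g. caddy's IP on caddynet, from this report
}

// clientIP returns the address a rate limiter should key on. X-Forwarded-For
// is only consulted when the direct TCP peer is a trusted proxy, so ordinary
// clients cannot spoof their way past the limiter by sending the header.
func clientIP(r *http.Request) string {
	peer, _, err := net.SplitHostPort(r.RemoteAddr)
	if err != nil {
		peer = r.RemoteAddr
	}
	if !trustedProxies[peer] {
		return peer // direct connection: use the socket address
	}
	// Walk X-Forwarded-For right to left, skipping trusted hops; the first
	// untrusted entry is the closest address a trusted proxy vouches for.
	hops := strings.Split(r.Header.Get("X-Forwarded-For"), ",")
	for i := len(hops) - 1; i >= 0; i-- {
		hop := strings.TrimSpace(hops[i])
		if hop != "" && !trustedProxies[hop] {
			return hop
		}
	}
	return peer
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// A real limiter would count requests per clientIP(r) here.
		w.Write([]byte(clientIP(r) + "\n"))
	})
	http.ListenAndServe(":8080", nil)
}
```

Walking the header right-to-left matters: the left-most entry is client-controlled, since whatever the client sends simply stays in front of the entries the proxies append.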
Hi, yes, exactly. One way of mitigating spoofing is in the reverse proxy itself: caddy, for instance, will strip client-supplied X-Forwarded-For headers unless the request comes from a configured trusted proxy. Perhaps a similar system could be implemented in cadence, where only forwarding headers sent by explicitly trusted proxy addresses are honored. Then with docker's network aliases, I guess you could just set the trusted proxy to the proxy container's alias; a sketch of that idea follows below. Anyway, that's just my two cents. There is a very exhaustive article on the perils of X-Forwarded-For.

Also, thank you for creating cadence - I was looking for something reasonably lightweight and simple, and all the other alternatives are either terribly outdated or super resource-hungry and overkill (cough cough azuracast cough cough). I was about to try setting up a liquidsoap + icecast setup myself, but cadence is perfect :)
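Building on the alias suggestion, a server could resolve the proxy's network alias at startup instead of hard-coding an address. A sketch, where the alias `caddy` is an assumption about the deployment:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Inside a container network, the embedded DNS server resolves a
	// service's network alias to its current container IP, so a trust
	// list can be built from a name rather than a hard-coded address.
	ips, err := net.LookupIP("caddy") // hypothetical network alias
	if err != nil {
		fmt.Println("could not resolve proxy alias:", err)
		return
	}
	trusted := make(map[string]bool)
	for _, ip := range ips {
		trusted[ip.String()] = true
	}
	fmt.Println("trusted proxy addresses:", trusted)
}
```

Container IPs can change across restarts, so a real implementation would probably want to re-resolve periodically or on lookup failure rather than only once at startup.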
Original issue from @sre3:

I would like to preface this issue with a note that my setup is rather unusual - I'm using an Oracle free tier to mess around with podman before deploying to my main server, and as a result I tend to be a very niche use case.
Initially, I had the stack running rootless with caddy as a reverse proxy. This was working fine, with one notable exception: in rootless podman, all remote addresses carry an internal forwarder IP instead of the real client's, due to the way podman handles rootless port forwarding (containers/podman#8193). This means, of course, that the rate-limiter also receives that IP. Being myself, I looked around for alternatives and found this[1], which I am now using.
Here is the run command for the cadence server, for example:

[podman run command elided]

This has the desired effect - caddy now has the proper IP:

[caddy log excerpt elided]

Unfortunately, cadence doesn't seem to agree:

[cadence log excerpt elided]
Here, 10.89.0.7 is caddy's IP on caddynet. This leads me to believe that cadence is not properly trusting external proxies, as caddy sends all the required headers by default. Everything else is working fine, as with the completely rootless setup.
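That diagnosis is consistent with a handler that keys only on the raw socket address. A self-contained illustration (the addresses are made up, with 10.89.0.7 standing in for the proxy): a server that reads only `r.RemoteAddr` will always report the proxy, even when X-Forwarded-For carries the real client:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

func main() {
	h := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// RemoteAddr is the TCP peer. Behind a reverse proxy that is
		// the proxy container, not the browser; the original client
		// only survives in the forwarding headers.
		fmt.Println("RemoteAddr:     ", r.RemoteAddr)
		fmt.Println("X-Forwarded-For:", r.Header.Get("X-Forwarded-For"))
	})
	req := httptest.NewRequest("GET", "/", nil)
	req.RemoteAddr = "10.89.0.7:51234"               // the proxy's address
	req.Header.Set("X-Forwarded-For", "203.0.113.9") // the real client
	h.ServeHTTP(httptest.NewRecorder(), req)
}
```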
[1] There are other options, such as --network slirp4netns:port_handler=slirp4netns and using a pod, but I was too lazy to reconfigure cadence's networking.