Hi,
I have seen that if we have invalid ROAs matching the conditions below, it would be great if the cache server itself rejected such ROAs rather than leaving it to the router to take care of them.
--> ROAs whose prefix length is greater than the max length, e.g.
prefix: "1.1.1.0/32"
maxLength: 30
--> ROAs with a max length greater than allowed, e.g. 1.1.1.0/64, or a max length >32 for IPv4 and >128 for IPv6.
--> ROAs with non-zero bits after the prefix length, e.g. 1.1.1.1/24 or 3000:1:1:1::1/64.
I think it would be good if the cache server itself found such ROAs and avoided publishing them to the clients. Just a suggestion. We ran into an issue in our own code when we received such prefixes.
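Not from this repository, but a minimal sketch of what such checks could look like in Go, using only the standard net package; the roa struct and validateROA function are hypothetical names, not this project's actual types:

```go
package main

import (
	"fmt"
	"net"
)

// roa carries the two fields the checks need; illustrative only.
type roa struct {
	Prefix    string
	MaxLength int
}

// validateROA rejects the three malformed cases described above.
func validateROA(r roa) error {
	ip, ipnet, err := net.ParseCIDR(r.Prefix)
	if err != nil {
		// Also catches out-of-range prefix lengths such as 1.1.1.0/64.
		return fmt.Errorf("unparseable prefix %q: %v", r.Prefix, err)
	}

	// Condition 3: non-zero bits after the prefix length
	// (e.g. 1.1.1.1/24 instead of 1.1.1.0/24).
	if !ip.Equal(ipnet.IP) {
		return fmt.Errorf("%s has non-zero host bits", r.Prefix)
	}

	plen, bits := ipnet.Mask.Size()

	// Condition 2: max length beyond the address family's limit
	// (>32 for IPv4, >128 for IPv6).
	if r.MaxLength > bits {
		return fmt.Errorf("%s: max length %d exceeds %d", r.Prefix, r.MaxLength, bits)
	}

	// Condition 1: prefix length greater than max length
	// (e.g. 1.1.1.0/32 with max length 30).
	if plen > r.MaxLength {
		return fmt.Errorf("%s: prefix length %d > max length %d", r.Prefix, plen, r.MaxLength)
	}

	return nil
}

func main() {
	for _, r := range []roa{
		{"1.1.1.1/24", 30}, // non-zero host bits
		{"1.1.1.0/32", 30}, // prefix length > max length
		{"1.1.1.0/24", 64}, // max length > 32 for IPv4
		{"1.1.1.0/24", 24}, // valid
	} {
		if err := validateROA(r); err != nil {
			fmt.Println("reject:", err)
		} else {
			fmt.Println("accept:", r.Prefix)
		}
	}
}
```

Note that net.ParseCIDR already refuses prefix lengths beyond the address family's limit, so that part of condition 2 falls out of the parse step for free; only the max length needs an explicit bound check.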
Yes, if I define local ROAs with the invalid conditions described above, the RPKI validator does not filter them and sends them out as is. It would be of great use for customers if you added such checks before advertising the ROAs; that would be an added benefit. Most probably we would run into these issues when an admin adds a few prefixes via the slurm.json file.
Example wrong ROAs that are being advertised, as defined in the slurm.json file:
{
  "asn": 13336,
  "prefix": "1.1.1.1/24",    --> non-zero bits after the prefix length, i.e. 1.1.1.1 instead of 1.1.1.0
  "maxPrefixLength": 30
},
{
  "asn": 13336,
  "prefix": "1.1.1.0/32",    --> prefix length is /32 but the max length is only /30
  "maxPrefixLength": 30
},
{
  "asn": 13336,
  "prefix": "1.1.1.0/24",
  "maxPrefixLength": 64      --> for an IPv4 prefix the valid range is <=32, but 64 is given
}
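For the slurm.json case specifically, the same check could run at load time and drop bad assertions before they are ever advertised. This is again a hypothetical sketch: the JSON field names follow RFC 8416's locallyAddedAssertions/prefixAssertions, while loadAssertions and the reuse of roa and validateROA from the sketch above are assumptions, not this project's actual code:

```go
import (
	"encoding/json"
	"log"
)

// slurmFile models only the RFC 8416 pieces relevant here;
// roa and validateROA come from the earlier sketch.
type slurmFile struct {
	LocallyAddedAssertions struct {
		PrefixAssertions []struct {
			ASN             uint32 `json:"asn"`
			Prefix          string `json:"prefix"`
			MaxPrefixLength int    `json:"maxPrefixLength"`
		} `json:"prefixAssertions"`
	} `json:"locallyAddedAssertions"`
}

// loadAssertions skips malformed assertions instead of advertising them.
func loadAssertions(data []byte) ([]roa, error) {
	var f slurmFile
	if err := json.Unmarshal(data, &f); err != nil {
		return nil, err
	}
	var out []roa
	for _, a := range f.LocallyAddedAssertions.PrefixAssertions {
		r := roa{Prefix: a.Prefix, MaxLength: a.MaxPrefixLength}
		if err := validateROA(r); err != nil {
			log.Printf("skipping SLURM assertion: %v", err)
			continue
		}
		out = append(out, r)
	}
	return out, nil
}
```

With this in place, all three example entries above would be logged and skipped at load time rather than reaching the routers.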
Also, you have only added a check for duplicate prefixes coming from the server (not for those locally defined through the slurm.json file); adding the checks above would help filter any such wrong updates.
Thanks,
Avinash C