
phy/ecp5rgmii.py: Add support for dynamic link speeds #138

Open · wants to merge 1 commit into master

Conversation

@rowanG077 (Contributor) commented Jul 14, 2023

This is still a draft. It doesn't work yet, but if you have time to look at it, the general structure should be final.

There are 3 main changes to make this work:

  • 1G links use DDR, but 100M and 10M do not. Based on the in-band status, we use back-pressure on the TX side to stall incoming bytes while the nibbles are transmitted out.
  • On the RX side, the rising-edge DDR inputs capture the incoming nibble. An 8-bit shift register shifts in the nibble on every data-valid, and the outgoing valid is alternated on every data-valid so that full bytes are transmitted into the core.
  • There are two possible configurations: either use the RX clock as the TX clock (loop clocking), or provide a 125 MHz clock yourself. The latter should be strongly encouraged, but it's not always possible. When a 125 MHz clock is provided, a clock divided down to 25 MHz or 2.5 MHz (depending on link speed) is created inside the PHY, purely for feeding the clock-forwarding ODDR primitives. This also produces a clock-enable signal for the TX side so it knows when to present the next nibble. With loop clocking, no clock division is done, since the RX clock is already at the correct speed.
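The RX byte assembly and the TX clock-enable described above can be sketched as plain-Python behavioral models (these are illustrations, not the Migen code in this PR; the divider values and the low-nibble-first ordering are assumptions based on the MII/RGMII convention at 10/100):

```python
def rx_nibbles_to_bytes(nibbles):
    # Behavioral model of the 100M/10M RX path: each data-valid cycle
    # captures one 4-bit nibble on the rising-edge DDR input; two
    # nibbles are shifted into an 8-bit register, and the outgoing
    # valid is asserted on every second nibble so the core only ever
    # sees whole bytes. The low nibble arrives first (MII convention).
    out, shift, half = [], 0, False
    for nib in nibbles:
        shift = ((nib & 0xF) << 4) | (shift >> 4)  # shift in from the top
        half = not half
        if not half:                               # second nibble completes a byte
            out.append(shift)
    return out


def tx_clock_enable(divider, cycles):
    # Clock-enable train for the TX side when a fixed 125 MHz clock is
    # provided: one pulse every `divider` sys-clock cycles (hypothetical
    # divider, e.g. 5 for a 25 MHz forwarded clock) marks when the next
    # nibble must be presented.
    return [int(t % divider == divider - 1) for t in range(cycles)]
```

For example, the nibble stream `[0xB, 0xA, 0x2, 0x1]` reassembles into the bytes `[0xAB, 0x12]`.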

@rowanG077 rowanG077 force-pushed the ecp5-dynamic-phy-speed branch 2 times, most recently from f7775e3 to b8c7408 on July 16, 2023 21:38
@rowanG077 rowanG077 marked this pull request as ready for review July 16, 2023 21:39
@rowanG077 rowanG077 force-pushed the ecp5-dynamic-phy-speed branch 2 times, most recently from b11b0c5 to 45072a6 on July 16, 2023 22:42
@enjoy-digital (Owner) commented

Thanks @rowanG077, that's also a feature I was interested in (for ECP5 and the other RGMII PHYs). I'll try to do a review soon.

@rowanG077 (Contributor, Author) commented Jul 26, 2023

I had some trouble with switching. It turns out that the clock switch MUST be glitchless, so I added that.
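The glitchless requirement can be illustrated with a small behavioral model of the classic two-input glitchless clock mux (a generic textbook sketch, not the code in this PR): each side gates its clock with an enable flip-flop updated on that clock's falling edge, and a side may only enable itself once the other side's enable has dropped, so the output never shows a runt pulse.

```python
def glitchless_mux_sim(steps=2000):
    # Behavioral sketch of a two-input glitchless clock mux. clk0
    # toggles every 2 steps (fast clock), clk1 every 10 steps (slow
    # clock); sel switches from clk0 to clk1 mid-run. Returns the
    # (min, max) high-pulse width seen on the muxed output, which
    # should be exactly the half-periods of the two source clocks,
    # i.e. no glitch pulses shorter than either half-period.
    clk = [0, 0]
    en = [1, 0]               # gating FFs; start with clk0 selected
    sel = 0
    out = []
    for t in range(steps):
        prev = clk[:]
        if t % 2 == 0:
            clk[0] ^= 1
        if t % 10 == 0:
            clk[1] ^= 1
        if t == steps // 2:
            sel = 1           # request a switch to clk1 mid-run
        prev_en = en[:]
        for i in (0, 1):
            if prev[i] == 1 and clk[i] == 0:      # falling edge of clk i
                en[i] = int(sel == i and not prev_en[1 - i])
        out.append((clk[0] & en[0]) | (clk[1] & en[1]))
    # Measure complete high-pulse widths on the muxed output.
    runs, cur, n = [], out[0], 1
    for v in out[1:]:
        if v == cur:
            n += 1
        else:
            runs.append((cur, n))
            cur, n = v, 1
    highs = [n for v, n in runs if v == 1]
    return min(highs), max(highs)
```

Between disabling the old clock and enabling the new one, the output simply sits low for a few cycles; that dead time is the price of a glitch-free handover.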

Dynamic 1G/100M mode works, but 10M doesn't, and I really have no clue why: 10M sends the same bytes as 100M, and simulation shows nothing weird.

I tested both the loop-clock scenario and the FPGA-generated TX clock. In both scenarios 1G/100M work and 10M does not. Maybe I'm missing something?

@enjoy-digital (Owner) commented

Great! 1G/100M will probably be the most useful. I'll look at the code to see if there's an obvious reason for 10M not working.

@rowanG077 (Contributor, Author) commented Jul 29, 2023

I think it's due to the IPG inserter running at sys-clock speed: with a dynamic link speed, the inter-packet gap becomes too small for 100M/10M. I was probably just lucky that 100M worked fine. I fixed this and will try it in hardware tomorrow.
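The IPG reasoning can be made concrete with a toy calculation (the function name and divider values are illustrative, not from the PR):

```python
def gap_in_byte_times(count_cycles, divider):
    # Byte times that actually elapse on the wire while the IPG
    # inserter waits `count_cycles` sys-clock cycles, when one byte is
    # only accepted every `divider` cycles at the current link speed
    # (illustrative divider: 1 at 1G, larger at 100M/10M).
    return count_cycles // divider

# Counting a fixed 12-cycle gap yields the required 12-byte
# inter-packet gap only at 1G; at slower speeds the counter must be
# gated by the same TX clock-enable, stretching the wait to
# 12 * divider sys-clock cycles.
assert gap_in_byte_times(12, 1) == 12        # 1G: gap is fine
assert gap_in_byte_times(12, 10) == 1        # slower link: gap far too small
assert gap_in_byte_times(12 * 10, 10) == 12  # gated counter: fine again
```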

@rowanG077 (Contributor, Author) commented Jul 30, 2023

This was not the problem... 10M is still completely silent. I'm sure that at the very least the RX path is broken: I don't even receive the requests on the FPGA.

There are quite a few scenarios (loop clocking yes/no, in-band status yes/no, external link state yes/no), but I could not test them all myself, since I don't have an FPGA board with RGMII in-band status support. The matrix below indicates which configurations I tested.

  loopclock | in-band status | linkstate | Tested (1G/100M working)
  ----------|----------------|-----------|-------------------------
      y     |       n        |     y     |  y
      n     |       n        |     y     |  y
      y     |       n        |     n     |  y (1G support only)
      n     |       n        |     n     |  y (1G support only)
      y     |       y        |     n     |  n
      n     |       y        |     n     |  n
