Description
I think it'd be beneficial to have a built-in pipeline abstraction that makes pipelined designs more concise and less prone to copy-paste errors. This RFC is based on Silice's pipeline feature.
Proposal
I propose that two methods be added to `Module`:

```python
def Pipeline(self, stb, name="pipeline", domain="sync"):
    ...

def Stage(self, name=None):
    ...
```
These would be used to create pipelines like this:
```python
class MulAdd(Elaboratable):
    def __init__(self, width):
        self.a = Signal(width)
        self.b = Signal(width)
        self.c = Signal(width)
        self.stb = Signal()
        self.o = Signal(width * 2 + 1)
        self.o_stb = Signal()
        self.width = width

    def elaborate(self, platform):
        m = Module()
        with m.Pipeline(self.stb) as pln:
            with m.Stage():
                pln.mul = self.a * self.b
            with m.Stage("ADD_ONE"):
                pln.added_one = pln.mul + 1
            with m.Stage("ADD"):
                m.d.sync += self.o.eq(pln.added_one + self.c)
        m.d.comb += self.o_stb.eq(pln.o_stb)
        return m
```
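To make the intended timing concrete, here is a plain-Python behavioral model of the three-stage example above. This is only a sketch of how I understand the semantics: the stage registers, the valid-bit handling, and carrying `c` alongside the data (in the RTL above, `self.c` is sampled live in the ADD stage) are my assumptions, not part of the proposed API.

```python
def muladd_model(inputs, cycles):
    """Cycle-accurate behavioral model of the 3-stage MulAdd pipeline.

    `inputs` maps a cycle number to an (a, b, c) tuple; on those cycles
    stb is high. Returns (cycle, o) pairs for cycles where o_stb is high.
    """
    # Pipeline registers after each stage; None means "stage not valid".
    s1 = s2 = s3 = None
    outputs = []
    for t in range(cycles):
        # o_stb strobes one cycle after the last stage executes,
        # i.e. when the final stage register holds a valid result.
        if s3 is not None:
            outputs.append((t, s3))
        # Advance the pipeline; assignment order means each stage reads
        # the previous stage's value from before this clock edge.
        s3 = (s2["added_one"] + s2["c"]) if s2 is not None else None  # ADD
        s2 = ({"added_one": s1["mul"] + 1, "c": s1["c"]}              # ADD_ONE
              if s1 is not None else None)
        if t in inputs:
            a, b, c = inputs[t]
            s1 = {"mul": a * b, "c": c}  # first (unnamed) stage
        else:
            s1 = None
    return outputs
```

For a single strobe at cycle 0 with a=2, b=3, c=4, the model reports o = (2\*3 + 1) + 4 = 11 with o_stb at cycle 3; issuing a second strobe on cycle 1 shows both transactions in flight at once, one stage apart.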
When the `stb` argument to `m.Pipeline()` is strobed, the pipeline starts on that clock cycle and progresses through the stages. Naturally, multiple stages can be active at any given time.
For data to be pipelined through each stage, you set the `pipeline.<signal name>` property to the value you want propagated forward through the stages. Any value set in a previous stage can be accessed in any later stage. The signal is registered in the clock domain the pipeline was instantiated with.
The `pipeline.o_stb` property is a signal that is strobed one cycle after the last stage executes (I think that's the right behavior; let me know if not).
A stage can be made invalid by calling the `pipeline.invalid_stage()` method. The invalidation is, of course, propagated forward through the remaining stages.
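Under that rule, an invalidated transaction becomes a bubble that travels in lockstep with the valid transactions behind it. A minimal plain-Python sketch of that behavior, assuming a three-stage pass-through pipeline (the `drop_if` predicate stands in for whatever condition would call `pipeline.invalid_stage()`; all names here are hypothetical):

```python
def pipeline_with_drop(values, cycles, drop_if):
    """3-stage pass-through pipeline model. Stage 2 invalidates items
    for which drop_if(item) is true; the resulting bubble (None)
    propagates forward to the output like any other stage value."""
    s1 = s2 = s3 = None  # pipeline registers; None means "invalid"
    out = []
    for t in range(cycles):
        if s3 is not None:
            out.append(s3)  # a valid result reaches the output
        # Stage 2 -> 3: an invalidated item becomes a bubble.
        s3 = s2 if (s2 is not None and not drop_if(s2)) else None
        s2 = s1
        s1 = values.pop(0) if values else None
    return out
```

Feeding 1, 2, 3, 4 through with a drop-if-even predicate yields only 1 and 3 at the output, with gaps where the dropped items would have been.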
Alternatives
- No pipeline abstraction is added.
- The `pipeline.<signal name>` support is changed to something more similar to the current way of synchronously setting signals.
- Signals are automatically pipelined if set within the pipeline. This was my original thought (and is closer to how Silice does it), but it turned out to be very difficult to implement.