
Implement RFC 40: Arbitrary Memory shapes. #1052


Closed
zyp wants to merge 1 commit

Conversation

@zyp (Contributor) commented Jan 24, 2024

No description provided.

@@ -831,7 +831,11 @@ def _convert_fragment(builder, fragment, name_map, hierarchy):
     if isinstance(fragment, mem.MemoryInstance):
         memory = fragment.memory
-        init = "".join(format(ast.Const(elem, ast.unsigned(memory.width)).value, f"0{memory.width}b") for elem in reversed(memory.init))
+        if isinstance(memory.shape, ast.ShapeCastable):
Member:

please move the const-casting code to Memory.__init__, so that 1) it's deduplicated between RTLIL and sim, 2) any errors are reported earlier, during Memory construction

Member:

er, to Memory.init setter I guess

@zyp (Contributor, Author):

I considered doing it that way, but I figure either way we should keep the uncasted values around so the getter is consistent with the setter. The getter also returns a mutable list, so the following code is valid:

mem = Memory(width=8, depth=16)
for i in range(mem.depth):
    mem.init.append(…)

If we want to do the cast in the setter, we should also make the list immutable without going through the setter.

Member:

We can make MemoryInitializer a new MutableSequence collection that ensures all of the values inside are const-castable.
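
A minimal sketch of what such a collection could look like; the class name comes from the comment above, but the constructor signature and validation details are assumptions, not the eventual design:

from collections.abc import MutableSequence

from amaranth.hdl.ast import Const, ShapeCastable, Value

class MemoryInitializer(MutableSequence):
    # Hypothetical collection: stores elements exactly as given, so the
    # getter stays consistent with the setter, but const-casts each element
    # on insertion so that errors surface at construction time.
    def __init__(self, shape, elems=()):
        self._shape = shape
        self._elems = []
        self.extend(elems)  # extend() funnels through insert(), validating each element

    def _validate(self, elem):
        # Const-cast against the memory shape; raises if not const-castable.
        if isinstance(self._shape, ShapeCastable):
            Value.cast(self._shape.const(elem))
        else:
            Const(elem, self._shape)

    def __getitem__(self, index):
        return self._elems[index]

    def __setitem__(self, index, elem):
        self._validate(elem)
        self._elems[index] = elem

    def __delitem__(self, index):
        del self._elems[index]

    def __len__(self):
        return len(self._elems)

    def insert(self, index, elem):
        self._validate(elem)
        self._elems.insert(index, elem)

Because the elements are kept uncast, mutation through append() still works as in the example above, while invalid values are rejected at the point of insertion rather than at elaboration time.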

@@ -1929,6 +1929,8 @@ class ValueSet(_MappedKeySet):
 class SignalKey:
     def __init__(self, signal):
+        if isinstance(signal, ValueCastable):
+            signal = Value.cast(signal)
Member:

Why do we need this?

@zyp (Contributor, Author):

I'd say this is a consequence of RFC 15: a Signal constructed from a ShapeCastable becomes a ValueCastable wrapping the signal.
When constructing a SignalKey from such a signal, in this case port.data, we need to unwrap the underlying signal.
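
A small illustration of the behavior being described (this snippet is illustrative, not from the patch; it uses amaranth.lib.data as an example ShapeCastable):

from amaranth.hdl.ast import Signal, Value
from amaranth.lib import data

layout = data.StructLayout({"a": 4, "b": 4})
sig = Signal(layout)   # per RFC 15, this is a data.View (a ValueCastable), not a bare Signal
raw = Value.cast(sig)  # unwraps the View to the underlying Signal
assert isinstance(raw, Signal)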

Member:

So this comes back to the additions to hdl/xfrm.py.

@@ -831,7 +831,11 @@ def _convert_fragment(builder, fragment, name_map, hierarchy):
     if isinstance(fragment, mem.MemoryInstance):
         memory = fragment.memory
-        init = "".join(format(ast.Const(elem, ast.unsigned(memory.width)).value, f"0{memory.width}b") for elem in reversed(memory.init))
+        if isinstance(memory.shape, ast.ShapeCastable):
+            cast = lambda elem: ast.Value.cast(memory.shape.const(elem)).value
+        else:
+            cast = lambda elem: elem
+        init = "".join(format(ast.Const(cast(elem), ast.unsigned(memory.width)).value, f"0{memory.width}b") for elem in reversed(memory.init))
Member:

This seems like a bunch of repeated work.
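
If the objection is that each element is const-cast twice on every RTLIL conversion (once through memory.shape.const() and again through ast.Const()), one alternative is for Memory to keep a pre-cast integer form of init. A sketch under that assumption; init_as_ints is a hypothetical accessor, not an existing API:

# Hypothetical: Memory already const-cast and width-masked the elements,
# so the backend only formats plain integers.
init = "".join(format(elem, f"0{memory.width}b") for elem in reversed(memory.init_as_ints))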

 depth : int
     Word count. This memory contains ``depth`` storage elements.
-init : list of int
+init : list
Member:

list of const-castable

@@ -75,7 +107,10 @@ def init(self, new_init):
     try:
         for addr in range(len(self._array)):
             if addr < len(self._init):
-                self._array[addr].reset = operator.index(self._init[addr])
+                if isinstance(self.shape, ShapeCastable):
Member:

Having a MemoryInitializer collection would make this code nicer, too.

@@ -254,6 +289,8 @@ class WritePort(Elaboratable):
     divide memory width evenly.
     """
     def __init__(self, memory, *, domain="sync", granularity=None, src_loc_at=0):
+        if granularity is not None and isinstance(memory.shape, ShapeCastable) or memory.shape.signed:
Member:

I think a check for not (isinstance(memory.shape, Shape) and memory.shape.unsigned) is clearer.
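
For reference, the condition in the diff parses as (granularity is not None and isinstance(memory.shape, ShapeCastable)) or memory.shape.signed, because and binds tighter than or. A sketch of the suggested form, spelled with Shape.signed since Shape exposes a signed flag rather than an unsigned one; the exception type and message here are made up:

# Hypothetical rewrite of the guard: reject granularity unless the shape
# is a plain unsigned Shape.
if granularity is not None and not (isinstance(memory.shape, Shape) and not memory.shape.signed):
    raise TypeError("write port granularity requires a plain unsigned shape")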

@@ -276,11 +276,11 @@ def map_memory_ports(self, fragment, new_fragment):
     for port in new_fragment.read_ports:
         port.en = self.on_value(port.en)
         port.addr = self.on_value(port.addr)
-        port.data = self.on_value(port.data)
+        port.data = self.on_value(Value.cast(port.data))
Member:

Doesn't this code break the logic of the transformer? @wanda-phi

@whitequark added this to the 0.5 milestone Jan 30, 2024
@whitequark (Member):

Superseded by #1142.

@whitequark closed this Feb 27, 2024
@whitequark removed this from the 0.5 milestone Feb 27, 2024