
Extract Decoder and Jump chips from the CPU #2


Open: wants to merge 2 commits into master

Conversation

@mudge commented Feb 11, 2015

This is something we discussed during meeting 5b as a potential refactoring. I'm not sure if it makes things any clearer, but it might be worth reviewing.

@leocassarani (Member)
I like this a lot, especially how much cleaner the CPU is after extracting the Jump chip.

I also wonder if a possible further improvement might be to pass only the relevant bits of the instruction into the Decoder and Jump chips. Naturally, if this were a real chip, it would be a little wasteful to have a 16-bit-wide input when we're only using 4 of the bits, although we've generally not been concerned about that throughout the book.

I haven't tested this at all, but what do we think about something like:

Decoder(i=instruction[15], dest=instruction[3..5], loadA=loada, loadD=loadd, writeM=writeM);
Jump(i=instruction[15], jump=instruction[0..2], ng=negative, zr=zero, out=jump);
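As a sanity check on the dest bits, the decode logic this snippet implies can be sketched in Python (purely illustrative, not HDL; in the Hack instruction, instruction[15] distinguishes A- from C-instructions, and instruction[5..3] are the d1/d2/d3 destination bits):

```python
def decode_dest(i, d1, d2, d3):
    """Sketch of the proposed Decoder's destination outputs.

    i          -- instruction[15]: True for a C-instruction
    d1, d2, d3 -- instruction[5], instruction[4], instruction[3]
    """
    load_a = (not i) or d1   # A-instructions always load the A register
    load_d = i and d2        # D is only a destination in C-instructions
    write_m = i and d3       # likewise for writing to memory
    return load_a, load_d, write_m
```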

@mudge (Author) commented Feb 12, 2015

Thanks, @leocassarani; I've updated the PR with your suggested changes. I did wonder about passing only the required bits but was a bit stumped on the naming. What do you think of the updates?

@Ryman (Member) commented Feb 12, 2015

I actually think the original commit is better: the Decoder chip should handle the entire decoding process, with the Jump chip as a subcomponent (or dealing with exported flags). If we compare to the diagram, there should probably be an output corresponding to each of the circled Cs. (I don't have the book to hand, so I'm just going off a skim of the HDL.)

I suggest an API similar to the below:

Decoder {
    IN
        instruction[16], // Instruction to decode
        ng,              // Negative flag from the ALU
        zr               // Zero flag from the ALU

    OUT
        isComputation,   // 1 if C-instruction, 0 if A-instruction
        loadA,           // 1 if loading A, 0 otherwise
        loadD,           // 1 if loading D, 0 otherwise
        writeM,          // Write to M?
        zx,              // ALU should zero x
        nx,              // ALU should negate x
        zy,              // ALU should zero y
        ny,              // ALU should negate y
        f,               // ALU function select: 1 for x+y, 0 for x&y
        no,              // ALU should negate output
        yIsM,            // 1 if the ALU should use M as y, 0 if it should use A
        jump             // Perform jump?
}

Jump {
    IN
        lt, // Jump flag LT
        eq, // Jump flag EQ
        gt, // Jump flag GT
        ng, // Value is negative?
        zr  // Value is zero?

    OUT
        jump // Perform jump?
}

This would mean that only the Decoder chip deals with instruction[foo] (decode logic, of which there are still 8 instances in the CPU); the CPU just deals with named output pins, and the Jump chip deals with a well-defined interface. So in theory, if our instruction definition changed in future, we could just change the Decoder chip and be done with it? But perhaps it's all a little overkill 🎱
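For reference, the jump condition this interface encodes can be sketched in Python (pin names mirror the Jump chip above; this is just an illustration of the intended truth table, not HDL):

```python
def jump(lt, eq, gt, ng, zr):
    """Should the PC load the jump target?

    lt, eq, gt -- the three jump bits of the C-instruction (j1/j2/j3)
    ng, zr     -- the ALU's negative and zero flags
    """
    positive = (not ng) and (not zr)  # ALU output is strictly greater than zero
    return (lt and ng) or (eq and zr) or (gt and positive)
```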

@mudge (Author) commented Feb 15, 2015

There's definitely something appealing about having a single Decoder chip that we interface with; my only hesitation is around producing output pins like zx that simply slice the instruction onto a new wire. Is there a way to do this without a useless chip?

IN instruction[16];
OUT zx;

PARTS:
And(a=instruction[11], b=true, out=zx);

@Ryman (Member) commented Feb 15, 2015

I don't remember coming across anything in the Appendix about doing that (and I don't have access to my book for a few days). I guess it's a trade-off between a minimal-gate solution and a more abstracted, (hopefully) more understandable one. It seems like either we're missing something, or there's a pretty big hole in their HDL spec for something that seems so basic.

I guess OUT zx = instruction[11] doesn't work?

I do remember reading that you could make a builtin using Java, but you'd still be left with a 'no-op' chip:

Noop(in=instruction[11], out=zx)

But since you would still have to specify in and out, it's not much of an improvement really.
Either way, I feel that having the CPU chip itself be understandable is a key goal; we can have our obfuscated, optimised equivalent as an aside perhaps!
