
Mapping to cgra #85

Open

wants to merge 307 commits into base: master

307 commits
5a0998f
correct tile.py
Jul 29, 2023
91ad37f
orig tile files
Jul 31, 2023
3947620
Add in fixes to get tensor apps working
weiya711 Jul 31, 2023
c8bfae3
Add in correct mat_residual
weiya711 Aug 4, 2023
8f1817f
Add in final onyx tensors used for ISSCC benchmarking
weiya711 Aug 9, 2023
74f6d5e
Rename file
weiya711 Aug 9, 2023
870205d
Change suitesparse file formatting
weiya711 Aug 9, 2023
1f9d877
Add in new suitesparse matrices for onyx_final_eval.txt
weiya711 Aug 11, 2023
d2bdbf8
Update final list of datasets again
weiya711 Aug 14, 2023
c3eb538
current matrix tiling flow
Aug 19, 2023
9c64433
updated matrix tiling flow
Aug 20, 2023
5be8b8f
minor matrix tiling updates
Aug 20, 2023
887b0a5
all matrix apps tiling flow
Aug 22, 2023
0f0b87e
loading in same vectors as CPU
Aug 24, 2023
2232c26
max tilesize matmul
Aug 26, 2023
13de226
Add in matmul with crddrop
weiya711 Aug 28, 2023
7b0234a
Update sam graph for matmul
weiya711 Aug 29, 2023
3050c6e
update compute/merge node to handle crddrop, some pycodestyle fixes
mbstrange2 Aug 29, 2023
7fde12e
Add in triangle counting
weiya711 Sep 1, 2023
65679b3
Add in triangle counting to sam-kernels.sh
weiya711 Sep 1, 2023
7ffc703
Add in pagerank/iter solve
weiya711 Sep 1, 2023
7bdb2d7
Add in graphs needed for more complex expressions:
weiya711 Sep 1, 2023
60f4ad7
input arg setup
Sep 2, 2023
f17e3cf
Add in masked triangle counting partial
weiya711 Sep 2, 2023
19080a3
latest update tiling
Sep 2, 2023
a80b449
helper scripts (need cleanup)
Sep 2, 2023
00d70e4
Add in a fix for mat_vecmul_iter.gv
weiya711 Sep 2, 2023
9f73f16
square matrices for vecmul iter
Sep 3, 2023
a192512
moved to right place
Sep 3, 2023
6c9dbd7
changes for new complex apps
Sep 3, 2023
eeccda9
Add in short spmv iter
weiya711 Sep 3, 2023
4bccd11
Add in graph for mask_tri with fiberwrite to act as bypass
weiya711 Sep 4, 2023
b7afe45
Add in fix to mat_mask_tri_fiberwrite
weiya711 Sep 4, 2023
e74efa6
Fix an oopsie
weiya711 Sep 4, 2023
75f91c5
Add in more fixes to mat_mask_tri_fiberwrite.gv
weiya711 Sep 4, 2023
adcb4f9
Add in T to top tensor
weiya711 Sep 4, 2023
6237885
Fix the 'T' mistake
weiya711 Sep 4, 2023
ab2a10d
Add in format
weiya711 Sep 4, 2023
a1440cd
syn mat
Oct 9, 2023
a50f69d
added new apps
kalhankoul96 Oct 12, 2023
ac0261a
updated new graphs for opal and mapping code to cope with the addition…
bobcheng15 Oct 13, 2023
0657854
updated the color of max in the graph
bobcheng15 Oct 13, 2023
e4e64a0
updated sam graph mapping code to take the new cmrg_mode signal into …
bobcheng15 Oct 13, 2023
3e78adc
update data type of the default mode value
bobcheng15 Oct 16, 2023
8540155
include sam simulation code that models compression using crddrop
bobcheng15 Oct 17, 2023
313b50a
Merge branch 'mapping_to_cgra_opal' of https://github.com/weiya711/sa…
bobcheng15 Oct 17, 2023
05323c7
fixed style issues that are failing the CI
bobcheng15 Oct 17, 2023
cdbea75
fixed style issues that are failing the CI
bobcheng15 Oct 17, 2023
0fb04fd
removed opal graphs that are not ready yet
bobcheng15 Oct 17, 2023
5dd14cd
added unit test for valdropper
bobcheng15 Oct 17, 2023
aa3d1ed
Merge pull request #96 from weiya711/mapping_to_cgra_opal
kalhankoul96 Oct 18, 2023
8c42d9a
matmul + relu
kalhankoul96 Oct 30, 2023
b014c06
added spmm kernel
bobcheng15 Nov 1, 2023
1cb638a
remove unwanted benchmarks
bobcheng15 Nov 1, 2023
acd05d0
added masked_broadcast and trans_masked_broadcast
bobcheng15 Nov 2, 2023
e63d352
fix style
kalhankoul96 Nov 2, 2023
2d52bfd
Add VR mode config reg hard coded to 0
mcoduoza Nov 3, 2023
1623d46
Merge branch 'mapping_to_cgra' into mcoduoza-vector-accum
mcoduoza Nov 3, 2023
5498dc1
Merge pull request #99 from weiya711/mcoduoza-vector-accum
kalhankoul96 Nov 6, 2023
550fe3f
Merge remote-tracking branch 'origin/mapping_to_cgra' into add_sparse…
bobcheng15 Nov 6, 2023
09490d7
Merge pull request #100 from weiya711/add_sparse_ml_kernel
bobcheng15 Nov 7, 2023
0478df1
Merge branch 'matmul_relu' into cgra_fp_op_support
bobcheng15 Nov 8, 2023
0dafd00
add support for floating point matrix generation and parsing fp_mul f…
bobcheng15 Nov 8, 2023
6a0b322
added support for generating and dumping fp16 matrices
bobcheng15 Nov 9, 2023
ef31fa3
update configuration code for pe and reduce to account for the new co…
bobcheng15 Nov 15, 2023
d0793dd
update port name for connection from reduce to reduce, also updated t…
bobcheng15 Nov 15, 2023
b39b2b4
fix code style
bobcheng15 Nov 15, 2023
cade108
Merge pull request #101 from weiya711/mapping_to_cgra_update_reduce
bobcheng15 Nov 15, 2023
54c66b7
Add VR SAM updates
mcoduoza Nov 16, 2023
d089dfb
Add missing primitives
mcoduoza Nov 16, 2023
f28f02a
Update spacc tests
mcoduoza Nov 16, 2023
e84a882
New graphs
mcoduoza Nov 17, 2023
a724a56
Revert "Update spacc tests"
mcoduoza Nov 17, 2023
d48e1fc
Revert "New graphs"
mcoduoza Nov 17, 2023
892cda6
Add the graph
mcoduoza Nov 17, 2023
820d6b8
Roll back accumulator.py; owhsu to merge later
mcoduoza Nov 17, 2023
8b313f8
Style fix
mcoduoza Nov 17, 2023
5deea43
Merge pull request #102 from weiya711/vector-accum-mapping
mcoduoza Nov 17, 2023
cc36ddf
Initial attempt at bringing crddrop back
mcoduoza Nov 22, 2023
ef8043b
More connection rules for ikj crddrop support
mcoduoza Nov 22, 2023
b99e4a2
add 'exp' to the list of glb names to be annotated
bobcheng15 Nov 22, 2023
8a3c937
Merge remote-tracking branch 'origin/mapping_to_cgra' into matmul_relu
bobcheng15 Nov 23, 2023
2d9960b
added code to support routing from ComputeNode to Max, added graph fo…
bobcheng15 Nov 25, 2023
09458bb
Update graph so crddrop doesn't block upstream
mcoduoza Nov 26, 2023
c9c0193
Style fix
mcoduoza Nov 26, 2023
428033b
Merge pull request #104 from weiya711/ikj_crddrop_support
mcoduoza Nov 26, 2023
a8cd9a8
Merge branch 'mapping_to_cgra' into matmul_relu
bobcheng15 Nov 26, 2023
279a49c
Merge pull request #103 from weiya711/matmul_relu
bobcheng15 Nov 27, 2023
9c71a5e
Merge branch 'mapping_to_cgra' into exp_glb_config
bobcheng15 Nov 27, 2023
a74f470
add parsing and configuration support for the ops required by exp
bobcheng15 Nov 28, 2023
9698914
Merge branch 'mapping_to_cgra' into cgra_fp_op_support
bobcheng15 Nov 28, 2023
d699636
merged fp_op_support
bobcheng15 Nov 28, 2023
30b1224
added support to configure one of the operands for fp_mul and and as a…
bobcheng15 Nov 28, 2023
f013dd3
update get_matrix_from file function to enable bf16
bobcheng15 Nov 28, 2023
d4e6eeb
Merge branch 'cgra_fp_op_support' of github.com:weiya711/sam into cgr…
bobcheng15 Nov 28, 2023
3ec89e5
Merge branch 'cgra_fp_op_support' into exp_glb_config
bobcheng15 Nov 28, 2023
18e2b2d
fixed bug in decoding the rb_const value for fp_mul and and
bobcheng15 Nov 28, 2023
c82d99e
added graph for spmv and spmv_relu
bobcheng15 Dec 7, 2023
b40ba61
Merge pull request #105 from weiya711/add_sparse_ml_kernel
bobcheng15 Dec 11, 2023
d132d85
added mapping and routing support for fp_max, fp_add and faddiexp ins…
bobcheng15 Dec 14, 2023
ff8f544
update matrix generation code to avoid turning 0 into a very small value
bobcheng15 Dec 14, 2023
e72c606
Merge branch 'mapping_to_cgra' into exp_glb_config
bobcheng15 Dec 14, 2023
3992942
add graph of mat_elemadd_leaky_relu.gv
bobcheng15 Dec 14, 2023
ee9ef68
added lassen to requirements.txt
bobcheng15 Dec 14, 2023
88699fa
added peak to requirements.txt
bobcheng15 Dec 14, 2023
e7dabb5
remove lassen and peak dependencies from requirements.txt and move th…
bobcheng15 Dec 14, 2023
9cb68b7
update peak and lassen installation script
bobcheng15 Dec 14, 2023
973b47d
remove peak and lassen directory after installation so the linter doe…
bobcheng15 Dec 14, 2023
58c43a2
add peak and lassen to the exclude list when running flake8
bobcheng15 Dec 14, 2023
55202c5
add peak and lassen to the exclude list when running flake8
bobcheng15 Dec 14, 2023
6442eb3
fix syntax error in the --exclude argument of flake8
bobcheng15 Dec 14, 2023
e01e93f
fix code style
bobcheng15 Dec 14, 2023
2c699f9
fixes for suitesparse apps
Dec 14, 2023
b2d3c7d
add crddrops to mat_vecmul
kalhankoul96 Dec 14, 2023
f308aed
merge in tiling branch
kalhankoul96 Dec 18, 2023
c2a4d73
tiling script cleanup
Dec 19, 2023
768f5fa
merged tiling branch with mapping_to_cgra dev branch
kalhankoul96 Dec 19, 2023
9fc0993
Merge pull request #106 from weiya711/exp_glb_config
bobcheng15 Jan 6, 2024
50cd072
added support for multiple tiles
kalhankoul96 Jan 8, 2024
0dc4a47
Merge branch 'mapping_to_cgra' into mapping_to_cgra_suitesparse_fixes
kalhankoul96 Jan 9, 2024
1558668
style fixing
kalhankoul96 Jan 9, 2024
2c067d1
more style cleanup
kalhankoul96 Jan 9, 2024
1134683
more style fixes
kalhankoul96 Jan 9, 2024
cd2dda3
more style fixes
kalhankoul96 Jan 9, 2024
7af1acc
more style fixes
kalhankoul96 Jan 9, 2024
4d03070
more style fixes
kalhankoul96 Jan 9, 2024
0b78457
Merge pull request #107 from weiya711/mapping_to_cgra_suitesparse_fixes
kalhankoul96 Jan 9, 2024
0a24c86
fix import path
kalhankoul96 Jan 10, 2024
8d535b9
updated the graph of spmm_ijk_crdddrop_relu to avoid deadlock in the …
bobcheng15 Jan 11, 2024
2e58788
fixes potential primitive deadlock in the graph of matmul_ijk_crddrop_…
bobcheng15 Jan 11, 2024
7d5cc49
Merge pull request #108 from weiya711/sparse_merge_fixes
bobcheng15 Jan 12, 2024
4f2ef17
tensor tiling
Jan 14, 2024
41da283
fix link
kalhankoul96 Jan 14, 2024
dc39c76
fix link2
kalhankoul96 Jan 14, 2024
6ba4851
Merge pull request #110 from weiya711/lint_fix
kalhankoul96 Jan 14, 2024
4567a7d
generate tensor tile formats
Jan 15, 2024
3b00f74
Merge branch 'mapping_to_cgra' of https://github.com/weiya711/sam int…
Jan 15, 2024
016c075
debug with akhilesh
kalhankoul96 Jan 17, 2024
8946ec5
updated parse.dot and compute_node.py to generate coreir and use meta…
bobcheng15 Jan 20, 2024
56744d9
update mapped input port storing logic, compute to compute connection …
bobcheng15 Jan 21, 2024
6ded96d
update the intersect-to-compute connection logic to use ports metamap…
bobcheng15 Jan 22, 2024
126a18b
alu mapping with metamapper now happens in a standalone function, als…
bobcheng15 Jan 22, 2024
4187b78
parse_dot now dumps the alu coreir spec and mapped alu coreir spec in…
bobcheng15 Jan 22, 2024
93a2846
remove unwanted file
bobcheng15 Jan 22, 2024
c5b457b
remove unwanted breakpoint
bobcheng15 Jan 22, 2024
7f2ae26
clean up tiling flow
Jan 22, 2024
3a8a485
style cleanup
Jan 22, 2024
c3f9705
pointing to right taco
Jan 22, 2024
47ca51a
fixed matrix flow
kalhankoul96 Jan 22, 2024
1cebe89
Merge pull request #112 from weiya711/mapping_to_cgra_tensor_debug
kalhankoul96 Jan 22, 2024
e553264
Merge remote-tracking branch 'origin/mapping_to_cgra' into sparse_met…
bobcheng15 Jan 23, 2024
8cf35f8
update parse_dot and compute_node along with relu graphs to support m…
bobcheng15 Jan 23, 2024
010929a
fix code style
bobcheng15 Jan 23, 2024
97cef1e
update compute_node and reduce_node to remove the hacked connection t…
bobcheng15 Jan 23, 2024
a326102
remove the need to list all compute node type in map_app() within pas…
bobcheng15 Jan 23, 2024
5b4f346
remove unwanted garnet_PE.v
bobcheng15 Jan 23, 2024
ca67069
updated parse_dot.py to support floating point ops
bobcheng15 Jan 27, 2024
27c1d15
mttkrp unfuseds
kalhankoul96 Feb 1, 2024
36773e8
Merge branch 'mapping_to_cgra' into add_mttkrp_unfuseds
kalhankoul96 Feb 1, 2024
01e0ee8
Empty-Commit
kalhankoul96 Feb 1, 2024
a888fe5
update the mat_elemadd_leaky_relu graph to rely on metamapper to rem…
bobcheng15 Feb 1, 2024
ef1598f
Merge pull request #116 from weiya711/add_mttkrp_unfuseds
kalhankoul96 Feb 1, 2024
47df739
updated compute_node to support parsing opcode for remapped complex …
bobcheng15 Feb 1, 2024
7765100
added support for remapping complex op using metamapper, now works fo…
bobcheng15 Feb 1, 2024
6242c3a
fix codestyle
bobcheng15 Feb 1, 2024
d76c95f
fix style again
bobcheng15 Feb 1, 2024
3be2961
Merge remote-tracking branch 'origin/mapping_to_cgra' into sparse_met…
bobcheng15 Feb 6, 2024
ce5ba84
Merge pull request #117 from weiya711/sparse_metamapper
kalhankoul96 Feb 12, 2024
d1ed8f7
1. Adding floating point capabilities to reduce block. 2. Fix for gen…
samidhm Feb 29, 2024
ba9284d
Adding spmm_ijk_crddrop_fp graph (Dense Matrix-Sparse Matrix matmul …
samidhm Feb 29, 2024
cbd63fc
formatting changes
samidhm Mar 1, 2024
634ef57
Merge pull request #118 from weiya711/samidhm_floating_point_add
bobcheng15 Mar 2, 2024
c7689f1
adding floating point graphs
samidhm Mar 7, 2024
f40ce01
Fix the file_name/tensor_name conflict issue by using a more explicit…
pohantw Mar 7, 2024
2df9ff5
remove CI step that adds conda to system path, switch to pip to instal…
bobcheng15 Mar 7, 2024
fd1786e
allow negative numbers for suitesparse inputs
pohantw Mar 7, 2024
bbcaac4
Merge pull request #119 from weiya711/samidhm_floating_point_add
bobcheng15 Mar 8, 2024
f2c353c
fix the file name matching issue
pohantw Mar 10, 2024
d4a9853
Merge pull request #123 from weiya711/mapping_to_cgra_fix_file_read
pohantw Mar 11, 2024
b3b7015
added the selection for num
Joejoedesu Mar 21, 2024
6c35a5c
style fix
Joejoedesu Mar 21, 2024
9663559
Merge pull request #125 from weiya711/mapping_to_cgra_num
Joejoedesu Mar 22, 2024
dd1b972
removed environment var that skip the formal check for floating point…
bobcheng15 Apr 1, 2024
c4ec544
Merge pull request #126 from weiya711/reenable_sparse_formal_check
bobcheng15 Apr 1, 2024
3ec3a58
added graph for dense to sparse conversion
bobcheng15 Apr 9, 2024
70d2a4b
WIP: sp2dn conversion working, but need to investigate why flush is a…
bobcheng15 Apr 10, 2024
86eb691
removed printing statement and commented code, also remove connection…
bobcheng15 Apr 16, 2024
e5abf90
fix code style
bobcheng15 Apr 16, 2024
ddfc4af
fix code style
bobcheng15 Apr 16, 2024
f70adc5
Merge branch 'dense_scanner_glb_mapping_fix' into add_sparse_dense_co…
bobcheng15 Apr 16, 2024
a9d96da
add graph for mat_elemdiv, update the exponential operation name from…
bobcheng15 Apr 26, 2024
e594051
remove pydata (which no longer exists) from dependency
bobcheng15 Apr 26, 2024
99c17ca
Merge pull request #129 from weiya711/dependency_fix
bobcheng15 Apr 26, 2024
72ad327
Merge branch 'mapping_to_cgra' into add_mat_elemdiv
bobcheng15 Apr 26, 2024
5d35390
fix code style
bobcheng15 Apr 26, 2024
1263936
fix code style
bobcheng15 Apr 26, 2024
91a2915
Merge pull request #128 from weiya711/add_mat_elemdiv
bobcheng15 Apr 27, 2024
274cbbc
remove connection from dense scanner to glb
bobcheng15 Apr 29, 2024
499369f
Merge branch 'mapping_to_cgra' into add_sparse_dense_conversion_graph
bobcheng15 Apr 29, 2024
a8c09c6
fixed code style
bobcheng15 Apr 29, 2024
f839b4c
Merge pull request #130 from weiya711/add_sparse_dense_conversion_graph
bobcheng15 Apr 30, 2024
deecbeb
removed commented code, and added support for mapping the updated den…
bobcheng15 May 6, 2024
cefe8b0
Merge branch 'mapping_to_cgra' into dense_scanner_fix
bobcheng15 May 6, 2024
152ac8e
add code so that the connection between GLB and dense scanner is remo…
bobcheng15 May 7, 2024
efd86e9
updated sam simulation code of the dense scanner so leading stop toke…
bobcheng15 May 10, 2024
68da778
Merge pull request #131 from weiya711/dense_scanner_fix
bobcheng15 May 15, 2024
c1da2da
allow empty line in output sim
kalhankoul96 May 16, 2024
adf6e26
support for back2back aha test with sparse tile pipelining
kalhankoul96 May 17, 2024
084edae
lint fixes
kalhankoul96 May 17, 2024
96cbffb
Merge pull request #132 from weiya711/back2back_apps
kalhankoul96 May 18, 2024
269ac20
update the uncompressed read scanner to allow it to handle maybe tokens
bobcheng15 May 23, 2024
86170d4
Remove egg-info file that was committed by someone who shouldn't have...
weiya711 Jun 6, 2024
1c4240d
Add in masked triangle counting for VLSI demo
weiya711 Jun 6, 2024
5c412ef
initial commit and single app without arbiter
Joejoedesu Jun 12, 2024
bac6f13
updated port names for crddrop in hw_nodes connections
bobcheng15 Jun 13, 2024
5dd43f3
added triangle counting to onyx folder
kalhankoul96 Jun 13, 2024
0673d9f
added formatting code for sparse ml
bobcheng15 Jun 14, 2024
1b52bcf
fix code style
bobcheng15 Jun 14, 2024
4cf6591
Merge pull request #133 from weiya711/crddrop_refactor
bobcheng15 Jun 14, 2024
d830b6f
added todo msg for bfbin2float and float2bfbin functions
bobcheng15 Jun 15, 2024
a2da989
fix code style
bobcheng15 Jun 15, 2024
eaf80ee
Merge pull request #134 from weiya711/sparse-ml-format
bobcheng15 Jun 15, 2024
3bcb231
passing programs in matmul
Joejoedesu Jun 16, 2024
36d5b48
style fix
Joejoedesu Jun 16, 2024
574c381
style fix attempt two
Joejoedesu Jun 16, 2024
b5bf4de
temp fix for lut with _
kalhankoul96 Jun 16, 2024
5ffba07
fix style
kalhankoul96 Jun 16, 2024
b22d902
Merge branch 'mapping_to_cgra' into time_multiplexing
kalhankoul96 Jun 16, 2024
78ff1ac
Merge pull request #136 from weiya711/time_multiplexing
kalhankoul96 Jun 27, 2024
e381f7a
updated matrix generation and parsing code to support reading in floa…
bobcheng15 Jun 27, 2024
da04fa3
fix code style
bobcheng15 Jul 11, 2024
cedc82c
initial commit, hooking up logic
kalhankoul96 Jul 14, 2024
6624fd4
support unroll up to 16
kalhankoul96 Jul 19, 2024
7b594e0
style fixes
kalhankoul96 Jul 19, 2024
a66a660
Merge branch 'mapping_to_cgra' into passthrough_as_buffer
kalhankoul96 Jul 19, 2024
405e802
cleanup
kalhankoul96 Jul 20, 2024
a5d2392
Merge pull request #138 from weiya711/passthrough_as_buffer
kalhankoul96 Jul 20, 2024
115014e
Merge branch 'mapping_to_cgra' into regression_refactor
bobcheng15 Jul 22, 2024
516b686
Merge pull request #137 from weiya711/regression_refactor
bobcheng15 Jul 23, 2024
709718f
fix max size of tile calc
kalhankoul96 Jul 28, 2024
e7a5129
Merge pull request #141 from weiya711/glb_batch_size
kalhankoul96 Jul 29, 2024
df121ca
fix bug handling sam graphs without consecutively numbered nodes
kalhankoul96 Aug 12, 2024
bac356a
style fix
kalhankoul96 Aug 12, 2024
47895d2
Merge pull request #142 from weiya711/unroll_bug_fix
kalhankoul96 Aug 12, 2024
76235ce
code cleanup
kalhankoul96 Oct 15, 2024
7624452
Update makefile.yml
kalhankoul96 Oct 15, 2024
0df1737
Update python-package-conda.yml
kalhankoul96 Oct 15, 2024
0809751
lint fix
kalhankoul96 Oct 15, 2024
2151dc7
Merge pull request #143 from weiya711/print_cleanup
kalhankoul96 Oct 16, 2024
bb37e00
Clean up SAM repo
weiya711 Oct 28, 2024
44 changes: 44 additions & 0 deletions compiler/sam-outputs/onyx-dot/matmul_ikj.gv
@@ -0,0 +1,44 @@
digraph SAM {
comment="X=ss01,B=ss01,C=ss01"
20 [comment="type=vectorreducer,index=j" label="VectorReducer j" color=brown shape=box style=filled type="vectorreducer" accum_index="j"]
0 [comment="type=fiberwrite,mode=vals,tensor=X,size=1*B0_dim*C1_dim,sink=true" label="FiberWrite Vals: X" color=green3 shape=box style=filled type="fiberwrite" tensor="X" mode="vals" size="1*B0_dim*C1_dim" sink="true"]
1 [comment="type=fiberwrite,index=j,tensor=X,mode=1,format=compressed,segsize=B0_dim+1,crdsize=B0_dim*C1_dim,sink=true" label="FiberWrite j: X1\ncompressed" color=green3 shape=box style=filled type="fiberwrite" index="j" tensor="X" mode="1" format="compressed" segsize="B0_dim+1" crdsize="B0_dim*C1_dim" sink="true"]
19 [comment="type=fiberlookup,index=i,tensor=B,mode=0,format=compressed,src=true,root=true" label="FiberLookup i: B0\ncompressed" color=green4 shape=box style=filled type="fiberlookup" index="i" tensor="B" mode="0" format="compressed" src="true" root="true"]
18 [comment="type=broadcast" shape=point style=invis type="broadcast"]
2 [comment="type=fiberwrite,index=i,tensor=X,mode=0,format=compressed,segsize=2,crdsize=B0_dim,sink=true" label="FiberWrite i: X0\ncompressed" color=green3 shape=box style=filled type="fiberwrite" index="i" tensor="X" mode="0" format="compressed" segsize="2" crdsize="B0_dim" sink="true"]
17 [comment="type=repsiggen,index=i" label="RepeatSignalGenerator i" color=cyan3 shape=box style=filled type="repsiggen" index="i"]
16 [comment="type=repeat,index=i,tensor=C,root=true" label="Repeat i: C" color=cyan2 shape=box style=filled type="repeat" index="i" tensor="C" root="true"]
15 [comment="type=fiberlookup,index=k,tensor=C,mode=0,format=compressed,src=true,root=false" label="FiberLookup k: C0\ncompressed" color=green4 shape=box style=filled type="fiberlookup" index="k" tensor="C" mode="0" format="compressed" src="true" root="false"]
13 [comment="type=intersect,index=k" label="intersect k" color=purple shape=box style=filled type="intersect" index="k"]
9 [comment="type=repeat,index=j,tensor=B,root=false" label="Repeat j: B" color=cyan2 shape=box style=filled type="repeat" index="j" tensor="B" root="false"]
7 [comment="type=arrayvals,tensor=B" label="Array Vals: B" color=green2 shape=box style=filled type="arrayvals" tensor="B"]
6 [comment="type=mul" label="Mul" color=brown shape=box style=filled type="mul"]
12 [comment="type=fiberlookup,index=j,tensor=C,mode=1,format=compressed,src=true,root=false" label="FiberLookup j: C1\ncompressed" color=green4 shape=box style=filled type="fiberlookup" index="j" tensor="C" mode="1" format="compressed" src="true" root="false"]
11 [comment="type=broadcast" shape=point style=invis type="broadcast"]
10 [comment="type=repsiggen,index=j" label="RepeatSignalGenerator j" color=cyan3 shape=box style=filled type="repsiggen" index="j"]
8 [comment="type=arrayvals,tensor=C" label="Array Vals: C" color=green2 shape=box style=filled type="arrayvals" tensor="C"]
14 [comment="type=fiberlookup,index=k,tensor=B,mode=1,format=compressed,src=true,root=false" label="FiberLookup k: B1\ncompressed" color=green4 shape=box style=filled type="fiberlookup" index="k" tensor="B" mode="1" format="compressed" src="true" root="false"]
19 -> 18 [label="crd" style=dashed type="crd" comment=""]
18 -> 17 [label="crd" style=dashed type="crd" comment=""]
17 -> 16 [label="repsig" style=dotted type="repsig"]
16 -> 15 [label="ref" style=bold type="ref"]
15 -> 13 [label="crd_in-C" style=dashed type="crd" comment="in-C"]
13 -> 9 [label="ref_out-B" style=bold type="ref" comment="out-B"]
9 -> 7 [label="ref" style=bold type="ref"]
7 -> 6 [label="val" type="val"]
13 -> 12 [label="ref_out-C" style=bold type="ref" comment="out-C"]
12 -> 11 [label="crd" style=dashed type="crd" comment=""]
19 -> 2 [label="crd_i" style=dashed type="crd" comment="i"]
11 -> 20 [label="crd_j" style=dashed type="crd" comment="j" special="true"]
11 -> 10 [label="crd" style=dashed type="crd" comment=""]
10 -> 9 [label="repsig" style=dotted type="repsig"]
12 -> 8 [label="ref" style=bold type="ref" comment=""]
8 -> 6 [label="val" type="val"]
15 -> 13 [label="ref_in-C" style=bold type="ref" comment="in-C"]
19 -> 14 [label="ref" style=bold type="ref" comment=""]
14 -> 13 [label="crd_in-B" style=dashed type="crd" comment="in-B"]
14 -> 13 [label="ref_in-B" style=bold type="ref" comment="in-B"]
6 -> 20 [label="mul_val_out" type="val"]
20 -> 0 [label="final_vals" type="val"]
20 -> 1 [label="crd_out-j" style=dashed type="crd" comment="out-j"]
}
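For readers who want to poke at a SAM graph like the one above outside the compiler flow, here is a minimal sketch (not part of this PR) that loads the .gv file with pydot and lists node and edge types; the path is taken from the diff header, and pydot is assumed to be installed.

import pydot

# graph_from_dot_file returns a list of graphs; this file holds exactly one.
(graph,) = pydot.graph_from_dot_file("compiler/sam-outputs/onyx-dot/matmul_ikj.gv")

# Each SAM node carries a quoted 'type' attribute (fiberlookup, intersect,
# vectorreducer, ...), so strip the quotes the way the PR's own code does.
for node in graph.get_nodes():
    attrs = node.get_attributes()
    if 'type' in attrs:
        print(node.get_name(), attrs['type'].strip('"'))

# Edge 'type' distinguishes the crd/ref/repsig/val streams between primitives.
for edge in graph.get_edges():
    print(edge.get_source(), '->', edge.get_destination(),
          edge.get_attributes().get('type', '').strip('"'))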
12 changes: 10 additions & 2 deletions sam/onyx/hw_nodes/buffet_node.py
@@ -124,8 +124,16 @@ def configure(self, attributes):
         cap0 = kratos.clog2(capacity_0) - fetch_width_log
         cap1 = kratos.clog2(capacity_1) - fetch_width_log
 
+        if 'vector_reduce_mode' in attributes:
+            is_in_vr_mode = attributes['vector_reduce_mode'].strip('"')
+            if is_in_vr_mode == "true":
+                vr_mode = 1
+            else:
+                vr_mode = 0
+
         cfg_kwargs = {
             'capacity_0': cap0,
-            'capacity_1': cap1
+            'capacity_1': cap1,
+            'vr_mode': vr_mode
         }
-        return (capacity_0, capacity_1), cfg_kwargs
+        return (capacity_0, capacity_1, vr_mode), cfg_kwargs
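The same vector_reduce_mode parse recurs in the intersect, read-scanner, and write-scanner hunks below. A minimal sketch of that shared pattern as a helper (parse_vr_mode is my name, not the PR's); note the hunks leave vr_mode unassigned when the attribute is missing, so a default keeps the name bound either way.

def parse_vr_mode(attributes, default=0):
    # Attribute values arrive quoted, exactly as in the DOT files.
    if 'vector_reduce_mode' in attributes:
        is_in_vr_mode = attributes['vector_reduce_mode'].strip('"')
        return 1 if is_in_vr_mode == "true" else 0
    return default

assert parse_vr_mode({'vector_reduce_mode': '"true"'}) == 1
assert parse_vr_mode({'vector_reduce_mode': '"false"'}) == 0
assert parse_vr_mode({}) == 0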
14 changes: 9 additions & 5 deletions sam/onyx/hw_nodes/compute_node.py
@@ -62,12 +62,16 @@ def connect(self, other, edge, kwargs=None):
         pe = self.get_name()
         # isect_conn = other.get_num_inputs()
 
-        if 'tensor' not in edge.get_attributes():
-            # Taking some liberties here - but technically this is the combo val
-            # isect_conn = other.get_connection_from_tensor('B')
-            isect_conn = other.get_connection_from_tensor('C')
+        if 'vector_reduce_mode' in edge.get_attributes():
+            if edge.get_attributes()['vector_reduce_mode']:
+                isect_conn = 0
         else:
-            isect_conn = other.get_connection_from_tensor(edge.get_tensor())
+            if 'tensor' not in edge.get_attributes():
+                # Taking some liberties here - but technically this is the combo val
+                # isect_conn = other.get_connection_from_tensor('B')
+                isect_conn = other.get_connection_from_tensor('C')
+            else:
+                isect_conn = other.get_connection_from_tensor(edge.get_tensor())
 
         new_conns = {
             f'pe_to_isect_{in_str}_{isect_conn}': [
9 changes: 6 additions & 3 deletions sam/onyx/hw_nodes/fiberaccess_node.py
@@ -228,9 +228,12 @@ def configure(self, attributes, flavor):

         cfg_tuple, cfg_kwargs = self.get_flavor(flavor=flavor).configure(attributes)
         cfg_kwargs['flavor'] = flavor
+        print("THESE ARE MY CONFIG KWARGS")
+        print(cfg_kwargs)
+        # breakpoint()
 
-        vr_mode = 0
-        cfg_tuple += (vr_mode,)
-        cfg_kwargs["vr_mode"] = vr_mode
+        # vr_mode = 0
+        # cfg_tuple += (vr_mode,)
+        # cfg_kwargs["vr_mode"] = vr_mode
 
         return cfg_tuple, cfg_kwargs
2 changes: 1 addition & 1 deletion sam/onyx/hw_nodes/hw_node.py
@@ -16,7 +16,7 @@ class HWNodeType(Enum):
     Broadcast = 12
     RepSigGen = 13
     CrdHold = 14
-    SpAccumulator = 15
+    VectorReducer = 15
     FiberAccess = 16
13 changes: 11 additions & 2 deletions sam/onyx/hw_nodes/intersect_node.py
@@ -180,6 +180,7 @@ def connect(self, other, edge, kwargs=None):
         print(edge.get_attributes())
         edge_comment = edge.get_attributes()['comment'].strip('"')
         tensor = edge_comment.split('-')[1]
+        print(self.tensor_to_conn)
         out_conn = self.tensor_to_conn[tensor]
         compute_conn = compute.get_num_inputs()
         new_conns = {
@@ -248,6 +249,14 @@ def configure(self, attributes):
         cmrg_enable = 0
         cmrg_stop_lvl = 0
         type_op = attributes['type'].strip('"')
+
+        if 'vector_reduce_mode' in attributes:
+            is_in_vr_mode = attributes['vector_reduce_mode'].strip('"')
+            if is_in_vr_mode == "true":
+                vr_mode = 1
+            else:
+                vr_mode = 0
+
         if type_op == "intersect":
             op = JoinerOp.INTERSECT.value
         elif type_op == "union":
@@ -258,6 +267,6 @@
             'cmrg_enable': cmrg_enable,
             'cmrg_stop_lvl': cmrg_stop_lvl,
             'op': op,
-            'vr_mode': 0
+            'vr_mode': vr_mode
         }
-        return (cmrg_enable, cmrg_stop_lvl, op, 0), cfg_kwargs
+        return (cmrg_enable, cmrg_stop_lvl, op, vr_mode), cfg_kwargs
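As a worked example of the configure() change above (attribute encoding mirrored from the hunk; the op value is a stand-in, not the repo's actual JoinerOp encoding): a union joiner tagged vector_reduce_mode="true" now reports vr_mode 1 where it was hard-coded 0 before.

attributes = {'type': '"union"', 'vector_reduce_mode': '"true"'}

vr_mode = 1 if attributes['vector_reduce_mode'].strip('"') == "true" else 0
op = 1  # stand-in for JoinerOp.UNION.value; the real encoding lives in the repo
cmrg_enable, cmrg_stop_lvl = 0, 0

# Matches the shape configure() returns: a positional tuple plus kwargs.
cfg_kwargs = {'cmrg_enable': cmrg_enable, 'cmrg_stop_lvl': cmrg_stop_lvl,
              'op': op, 'vr_mode': vr_mode}
assert (cmrg_enable, cmrg_stop_lvl, op, vr_mode) == (0, 0, 1, 1)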
11 changes: 10 additions & 1 deletion sam/onyx/hw_nodes/merge_node.py
@@ -62,7 +62,16 @@ def connect(self, other, edge, kwargs=None):

             return new_conns
         elif other_type == IntersectNode:
-            raise NotImplementedError(f'Cannot connect MergeNode to {other_type}')
+            isect = other.get_name()
+            print("MERGE TO UNION FOR VECTOR REDUCE")
+            new_conns = {
+                f'merge_to_union_inner': [
+                    ([(merge, f"cmrg_coord_out_{0}"), (isect, f"coord_in_{0}")], 17),
+                ]
+            }
+
+            return new_conns
+            # raise NotImplementedError(f'Cannot connect MergeNode to {other_type}')
         elif other_type == ReduceNode:
             # raise NotImplementedError(f'Cannot connect MergeNode to {other_type}')
             other_red = other.get_name()
37 changes: 27 additions & 10 deletions sam/onyx/hw_nodes/read_scanner_node.py
@@ -90,6 +90,9 @@ def connect(self, other, edge, kwargs=None):
         edge_attr = edge.get_attributes()
         if 'use_alt_out_port' in edge_attr:
             out_conn = 'block_rd_out'
+        elif ('vector_reduce_mode' in edge_attr):
+            if (edge_attr['vector_reduce_mode']):
+                out_conn = 'pos_out'
         else:
             out_conn = 'coord_out'
 
@@ -102,7 +105,13 @@
         elif other_type == IntersectNode:
             # Send both....
             isect = other.get_name()
-            isect_conn = other.get_connection_from_tensor(self.get_tensor())
+            if 'vector_reduce_mode' in edge.get_attributes():
+                if edge.get_attributes()['vector_reduce_mode']:
+                    isect_conn = 1
+            elif 'special' in edge.get_attributes():
+                isect_conn = 0
+            else:
+                isect_conn = other.get_connection_from_tensor(self.get_tensor())
 
             e_attr = edge.get_attributes()
             # isect_conn = 0
@@ -247,12 +256,12 @@ def configure(self, attributes):
             dim_size = 1
             stop_lvl = 0
 
-        if 'spacc' in attributes:
-            spacc_mode = 1
-            assert 'stop_lvl' in attributes
-            stop_lvl = int(attributes['stop_lvl'].strip('"'))
-        else:
-            spacc_mode = 0
+        # if 'spacc' in attributes:
+        #     spacc_mode = 1
+        #     assert 'stop_lvl' in attributes
+        #     stop_lvl = int(attributes['stop_lvl'].strip('"'))
+        # else:
+        #     spacc_mode = 0
 
         # This is a fiberwrite's opposing read scanner for comms with GLB
         if attributes['type'].strip('"') == 'fiberwrite':
@@ -283,6 +292,13 @@ def configure(self, attributes):
             lookup = 0
         block_mode = int(attributes['type'].strip('"') == 'fiberwrite')
 
+        if 'vector_reduce_mode' in attributes:
+            is_in_vr_mode = attributes['vector_reduce_mode'].strip('"')
+            if is_in_vr_mode == "true":
+                vr_mode = 1
+            else:
+                vr_mode = 0
+
         cfg_kwargs = {
             'dense': dense,
             'dim_size': dim_size,
@@ -294,11 +310,12 @@ def configure(self, attributes):
             'do_repeat': do_repeat,
             'repeat_outer': repeat_outer,
             'repeat_factor': repeat_factor,
-            'stop_lvl': stop_lvl,
+            # 'stop_lvl': stop_lvl,
             'block_mode': block_mode,
             'lookup': lookup,
-            'spacc_mode': spacc_mode
+            # 'spacc_mode': spacc_mode
+            'vr_mode': vr_mode
         }
 
         return (inner_offset, max_outer_dim, strides, ranges, is_root, do_repeat,
-                repeat_outer, repeat_factor, stop_lvl, block_mode, lookup, spacc_mode), cfg_kwargs
+                repeat_outer, repeat_factor, block_mode, lookup, vr_mode), cfg_kwargs
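Worth flagging from the hunk above: the read scanner's positional config tuple changed shape (stop_lvl and spacc_mode dropped, vr_mode added), so any caller unpacking it positionally has to move in lockstep. A quick before/after sketch, field names taken from the surrounding code:

old_fields = ('inner_offset', 'max_outer_dim', 'strides', 'ranges', 'is_root',
              'do_repeat', 'repeat_outer', 'repeat_factor', 'stop_lvl',
              'block_mode', 'lookup', 'spacc_mode')
new_fields = ('inner_offset', 'max_outer_dim', 'strides', 'ranges', 'is_root',
              'do_repeat', 'repeat_outer', 'repeat_factor',
              'block_mode', 'lookup', 'vr_mode')
assert len(new_fields) == len(old_fields) - 1  # one net field removed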
31 changes: 20 additions & 11 deletions sam/onyx/hw_nodes/write_scanner_node.py
@@ -79,6 +79,7 @@ def connect(self, other, edge, kwargs=None):
     def configure(self, attributes):
 
         stop_lvl = 0
+        init_blank = 0
 
         # compressed = int(attributes['format'] == 'compressed')
         if 'format' in attributes and 'vals' in attributes['format'].strip('"'):
@@ -89,14 +90,14 @@
         else:
             compressed = 1
 
-        if 'spacc' in attributes:
-            spacc_mode = 1
-            init_blank = 1
-            assert 'stop_lvl' in attributes
-            stop_lvl = int(attributes['stop_lvl'].strip('"'))
-        else:
-            spacc_mode = 0
-            init_blank = 0
+        # if 'spacc' in attributes:
+        #     spacc_mode = 1
+        #     init_blank = 1
+        #     assert 'stop_lvl' in attributes
+        #     stop_lvl = int(attributes['stop_lvl'].strip('"'))
+        # else:
+        #     spacc_mode = 0
+        #     init_blank = 0
 
         # compressed = int(attributes['format'] == 'compressed')
         if attributes['type'].strip('"') == 'arrayvals':
@@ -112,16 +113,24 @@
         else:
             block_mode = 0
 
+        if 'vector_reduce_mode' in attributes:
+            is_in_vr_mode = attributes['vector_reduce_mode'].strip('"')
+            if is_in_vr_mode == "true":
+                vr_mode = 1
+            else:
+                vr_mode = 0
+
         # block_mode = int(attributes['type'].strip('"') == 'fiberlookup')
         # cfg_tuple = (inner_offset, compressed, lowest_level, stop_lvl, block_mode)
-        cfg_tuple = (compressed, lowest_level, stop_lvl, block_mode, init_blank, spacc_mode)
+        cfg_tuple = (compressed, lowest_level, stop_lvl, block_mode, vr_mode, init_blank)
         cfg_kwargs = {
             # 'inner_offset': inner_offset,
             'compressed': compressed,
             'lowest_level': lowest_level,
             'stop_lvl': stop_lvl,
             'block_mode': block_mode,
-            'init_blank': init_blank,
-            'spacc_mode': spacc_mode
+            'vr_mode': vr_mode,
+            'init_blank': init_blank
+            # 'spacc_mode': spacc_mode
         }
         return cfg_tuple, cfg_kwargs
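The write scanner's tuple change above is subtler: vr_mode is inserted before init_blank rather than appended, so slot 4 changes meaning from init_blank to vr_mode. A small sanity check with illustrative values:

# Old order: (compressed, lowest_level, stop_lvl, block_mode, init_blank, spacc_mode)
# New order: (compressed, lowest_level, stop_lvl, block_mode, vr_mode, init_blank)
compressed, lowest_level, stop_lvl, block_mode = 1, 1, 0, 0
vr_mode, init_blank = 1, 0

cfg_tuple = (compressed, lowest_level, stop_lvl, block_mode, vr_mode, init_blank)
assert cfg_tuple[4] == vr_mode  # this slot previously carried init_blank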