BFS Push-pull, SSSP, SpMSpV #83

Merged
merged 75 commits into from
Oct 16, 2023
Changes from 9 commits
75 commits
c45d651
add BFS (benchmarkdotnet) benchmark result for b9ef28731a1f9a4549e7b8…
Mar 7, 2023
ec833fb
add BFS (benchmarkdotnet) benchmark result for b9ef28731a1f9a4549e7b8…
Mar 7, 2023
df59d19
add BFS (benchmarkdotnet) benchmark result for c7bb9b1addaf82693381e3…
Mar 25, 2023
fdebf18
add BFS (benchmarkdotnet) benchmark result for abdb353aa170e7693660ff…
Mar 25, 2023
9359530
add BFS (benchmarkdotnet) benchmark result for 7885e8ae7c843c91d05171…
Mar 25, 2023
346846f
add BFS (benchmarkdotnet) benchmark result for e850a49b54523883645408…
Apr 7, 2023
97d7743
add BFS (benchmarkdotnet) benchmark result for e0485c52128b16b0ba4668…
Apr 10, 2023
578c537
add BFS (benchmarkdotnet) benchmark result for cb8a79181709f3d138b077…
Apr 28, 2023
8d28782
Merge pull request #81 from YaccConstructor/dev
gsvgit May 16, 2023
f1dfc23
Turn API docs generation on.
gsvgit Apr 9, 2023
af19a02
Dependencies updated in order to fix CI build (log file format problem)
gsvgit May 16, 2023
564acf9
Release branch is master not main.
gsvgit May 16, 2023
be3395a
Documentation release of version 0.1.0
gsvgit May 16, 2023
1584013
Automate Release and Publish process (publish on CI)
gsvgit May 17, 2023
0305361
Bump version to 0.1.0-alpha1
gsvgit May 18, 2023
c5a0406
Fix misprint.
gsvgit May 18, 2023
cf49e7a
Configure separated builds in both debug and release modes.
gsvgit May 18, 2023
8a3b796
Correct misprint in workflow configuration yaml.
gsvgit May 18, 2023
41cdc46
Updated names of jobs (Release - Debug modes)
gsvgit May 18, 2023
f02ef72
Release docs on CI
gsvgit May 18, 2023
6f08dad
SSSP dense
kirillgarbar May 20, 2023
38ceee3
Radix returns sorted keys
kirillgarbar May 20, 2023
0d09177
SpMSpV bool only
kirillgarbar May 20, 2023
a97631a
Choose with keys
kirillgarbar May 20, 2023
a87949e
SegReduce without offsets
kirillgarbar May 20, 2023
7c22108
SpMSpV general
kirillgarbar May 21, 2023
d3bc08c
None on empty vectors
kirillgarbar May 21, 2023
d6b9fb5
BFS Push-pull and push
kirillgarbar May 21, 2023
808ea01
Paket lock update
kirillgarbar May 21, 2023
91513e1
Option.map
kirillgarbar May 22, 2023
9b6fd09
Radix version returning values only
kirillgarbar May 22, 2023
5a62156
Remove unused methods
kirillgarbar May 22, 2023
90fe441
ClArray.count uses map
kirillgarbar May 22, 2023
1f54823
Free
kirillgarbar May 22, 2023
c2db0de
Upper case in error messages
kirillgarbar May 22, 2023
8358a15
Fix comments in Arithmetic
kirillgarbar May 22, 2023
53fba87
Dataset folder set to default
kirillgarbar May 23, 2023
56f8097
Merge branch 'dev' into dev
kirillgarbar May 23, 2023
90a1efe
Paket lock fix
kirillgarbar May 23, 2023
0c5db3f
refactor: comments; common api
artemiipatov Jul 18, 2023
7ec498d
wip: refactor
artemiipatov Jul 18, 2023
5d900f5
refactor: namespace/module names
artemiipatov Jul 21, 2023
4c21e7e
refactor: add comments
artemiipatov Jul 22, 2023
be8c888
refactor: namespace/module names
artemiipatov Jul 22, 2023
40e3e3a
refactor: assembly attributes
artemiipatov Jul 23, 2023
d587397
refactor: readme.md
artemiipatov Jul 23, 2023
4c5dede
refactor: objects comments
artemiipatov Jul 23, 2023
45baacd
refactor: move Matrix.map, Vector.map; add: Vector.map tests
artemiipatov Jul 24, 2023
4841a03
refactor: vector.map tests
artemiipatov Jul 24, 2023
da2d1bd
fix: exception related to the use of bitonic
artemiipatov Jul 24, 2023
203d641
refactor: returned internal comments
artemiipatov Jul 26, 2023
04efa1e
fix: common
artemiipatov Jul 26, 2023
b9d202f
refactor: common; docs for matrix
artemiipatov Jul 27, 2023
6c0a71f
refactor: formatting
artemiipatov Jul 27, 2023
2d86b35
fix: blit, choose2 tests exception
artemiipatov Jul 29, 2023
51bb2b4
fix: kronecker memory leaks
artemiipatov Jul 31, 2023
779c8de
refactor: maxAllocSize: uint64
artemiipatov Aug 7, 2023
bb708d0
refactor: tests count
artemiipatov Aug 7, 2023
3ceb397
Move Map and Bitmap methods to separate modules to ease dependencies
kirillgarbar Sep 10, 2023
3616f94
SSSP optimization, using front as mask
kirillgarbar Sep 10, 2023
77d6fe5
Reduce instead of prefixSum
kirillgarbar Sep 10, 2023
a6a068d
Release collectedRows, Option.bind
kirillgarbar Sep 10, 2023
2f38f74
Non-blocking dispose
kirillgarbar Sep 10, 2023
1302d8e
Paket lock update
kirillgarbar Sep 10, 2023
eb82c53
refactor: matrix, vector comments
artemiipatov Sep 29, 2023
e187916
fix: spgemm negative maxAllocSize
artemiipatov Sep 30, 2023
441a2f9
fix: spgemm maxAllocSize calculation
artemiipatov Sep 30, 2023
8a79b41
Merge pull request #84 from artemiipatov/refactor
gsvgit Oct 2, 2023
48404f9
Merge remote-tracking branch 'YaccConstructor/master' into pr
kirillgarbar Oct 5, 2023
60500e9
Merge finish
kirillgarbar Oct 5, 2023
f4484a9
Enable all tests
kirillgarbar Oct 5, 2023
c6232d3
Format Program.fs
kirillgarbar Oct 5, 2023
ca7b275
paket.lock update
kirillgarbar Oct 5, 2023
53c6715
Encapsulate ClCell management inside operation
kirillgarbar Oct 10, 2023
d2951d4
mkNumericSumAsMul allow max length paths
kirillgarbar Oct 14, 2023
265 changes: 132 additions & 133 deletions paket.lock

Large diffs are not rendered by default.

167 changes: 167 additions & 0 deletions src/GraphBLAS-sharp.Backend/Algorithms/BFS.fs
@@ -71,3 +71,170 @@ module BFS =

levels
| _ -> failwith "Not implemented"

let singleSourceSparse
(add: Expr<bool option -> bool option -> bool option>)
(mul: Expr<bool option -> bool option -> bool option>)
(clContext: ClContext)
workGroupSize
=

let spMSpV =
SpMSpV.run add mul clContext workGroupSize

let zeroCreate =
ClArray.zeroCreate clContext workGroupSize

let ofList = Vector.ofList clContext workGroupSize

let maskComplemented =
Vector.Sparse.Vector.map2SparseDense Mask.complementedOp clContext workGroupSize

let fillSubVectorTo =
Vector.assignBySparseMaskInPlace (Convert.assignToOption Mask.assign) clContext workGroupSize

fun (queue: MailboxProcessor<Msg>) (matrix: ClMatrix.CSR<bool>) (source: int) ->
let vertexCount = matrix.RowCount

let levels = zeroCreate queue HostInterop vertexCount

let mutable frontier =
ofList queue DeviceOnly Sparse vertexCount [ source, true ]

let mutable level = 0
let mutable stop = false

while not stop do
match frontier with
| ClVector.Sparse front ->
level <- level + 1

//Assigning new level values
fillSubVectorTo queue levels front (clContext.CreateClCell level) levels

//Getting new frontier
match spMSpV queue matrix front with
| None ->
frontier.Dispose queue
stop <- true
| Some newFrontier ->
frontier.Dispose queue
//Filtering visited vertices
match maskComplemented queue DeviceOnly newFrontier levels with
| None ->
stop <- true
newFrontier.Dispose queue
| Some f ->
frontier <- ClVector.Sparse f
newFrontier.Dispose queue

| _ -> failwith "Not implemented"

levels
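The sparse kernel above is a level-synchronous BFS: each iteration assigns the current level to the frontier, expands it with a boolean SpMSpV, and masks out already-visited vertices. A minimal CPU sketch of the same loop, in Python over a CSR adjacency structure (the helper names and the set-based frontier are illustrative stand-ins, not the library's API):

```python
def spmspv_bool(indptr, indices, frontier):
    """Boolean SpMSpV sketch: the set of vertices reachable in one step
    from the sparse frontier (union of the frontier rows' column indices)."""
    out = set()
    for v in frontier:
        out.update(indices[indptr[v]:indptr[v + 1]])
    return out

def bfs_sparse(indptr, indices, n, source):
    """Level-synchronous BFS mirroring singleSourceSparse's loop."""
    levels = [None] * n            # zeroCreate: unvisited = None
    frontier = {source}            # sparse frontier as a set of vertex ids
    level = 0
    while frontier:
        level += 1
        for v in frontier:         # fillSubVectorTo: assign current level
            levels[v] = level
        reached = spmspv_bool(indptr, indices, frontier)
        # maskComplemented: keep only vertices with no level yet
        frontier = {v for v in reached if levels[v] is None}
    return levels
```

As in the F# version, the level counter is incremented before the assignment, so the source vertex ends up with level 1 and unreachable vertices keep None.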


let singleSourcePushPull
(add: Expr<bool option -> bool option -> bool option>)
(mul: Expr<bool option -> bool option -> bool option>)
(clContext: ClContext)
workGroupSize
=

let SPARSITY = 0.001f

let push nnz size =
(float32 nnz) / (float32 size) <= SPARSITY

let spMVTo =
SpMV.runTo add mul clContext workGroupSize

let spMSpV =
SpMSpV.runBoolStandard add mul clContext workGroupSize

let zeroCreate =
ClArray.zeroCreate clContext workGroupSize

let ofList = Vector.ofList clContext workGroupSize

let maskComplementedTo =
Vector.map2InPlace Mask.complementedOp clContext workGroupSize

let maskComplemented =
Vector.Sparse.Vector.map2SparseDense Mask.complementedOp clContext workGroupSize

let fillSubVectorDenseTo =
Vector.assignByMaskInPlace (Convert.assignToOption Mask.assign) clContext workGroupSize

let fillSubVectorSparseTo =
Vector.assignBySparseMaskInPlace (Convert.assignToOption Mask.assign) clContext workGroupSize

let toSparse = Vector.toSparse clContext workGroupSize

let toDense = Vector.toDense clContext workGroupSize

let countNNZ =
ClArray.count Predicates.isSome clContext workGroupSize

fun (queue: MailboxProcessor<Msg>) (matrix: ClMatrix.CSR<bool>) (source: int) ->
let vertexCount = matrix.RowCount

let levels = zeroCreate queue HostInterop vertexCount

let mutable frontier =
ofList queue DeviceOnly Sparse vertexCount [ source, true ]

let mutable level = 0
let mutable stop = false

while not stop do
level <- level + 1

match frontier with
| ClVector.Sparse front ->
//Assigning new level values
fillSubVectorSparseTo queue levels front (clContext.CreateClCell level) levels

//Getting new frontier
match spMSpV queue matrix front with
| None ->
frontier.Dispose queue
stop <- true
| Some newFrontier ->
frontier.Dispose queue
//Filtering visited vertices
match maskComplemented queue DeviceOnly newFrontier levels with
| None ->
stop <- true
newFrontier.Dispose queue
| Some f ->
newFrontier.Dispose queue

//Push/pull
if (push f.NNZ f.Size) then
frontier <- ClVector.Sparse f
else
frontier <- toDense queue DeviceOnly (ClVector.Sparse f)
f.Dispose queue
Contributor comment:
A large number of releases, which, if I'm not mistaken, are blocking. Can this cause a loss in performance?

| ClVector.Dense front ->
//Assigning new level values
fillSubVectorDenseTo queue levels front (clContext.CreateClCell level) levels

//Getting new frontier
spMVTo queue matrix front front

maskComplementedTo queue front levels front

//Emptiness check
let NNZ = countNNZ queue front

stop <- NNZ = 0

//Push/pull
if not stop then
if (push NNZ front.Length) then
frontier <- ClVector.Sparse(toSparse queue DeviceOnly front)
front.Free queue
else
frontier.Dispose queue

levels
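`singleSourcePushPull` switches frontier representations by density: while nnz/size stays at or below SPARSITY = 0.001 the frontier is kept sparse and expanded with SpMSpV; once it grows denser, it is converted to dense and expanded with SpMV plus an in-place complemented mask. A hedged Python sketch of that representation switch (a set and a boolean list stand in for the sparse and dense ClVector cases; names are illustrative):

```python
SPARSITY = 0.001  # same threshold as the F# code

def use_push(nnz, size):
    """Stay in sparse ("push") mode only while the frontier is very sparse."""
    return nnz / size <= SPARSITY

def bfs_push_pull(indptr, indices, n, source):
    """Direction-optimizing BFS sketch over a CSR adjacency structure."""
    levels = [None] * n
    frontier = {source}            # start sparse, like the F# version
    sparse = True
    level = 0
    while True:
        level += 1
        if sparse:
            for v in frontier:     # assign current level to the frontier
                levels[v] = level
            nxt = set()
            for v in frontier:     # push: expand each frontier vertex
                nxt.update(w for w in indices[indptr[v]:indptr[v + 1]]
                           if levels[w] is None)
            if not nxt:
                break
            if use_push(len(nxt), n):
                frontier = nxt
            else:                  # densify when the frontier grows
                sparse = False
                frontier = [v in nxt for v in range(n)]
        else:
            for v in range(n):     # dense pass, analogous to SpMV + mask
                if frontier[v]:
                    levels[v] = level
            nxt = [False] * n
            nnz = 0
            for v in range(n):
                if frontier[v]:
                    for w in indices[indptr[v]:indptr[v + 1]]:
                        if levels[w] is None and not nxt[w]:
                            nxt[w] = True
                            nnz += 1
            if nnz == 0:           # emptiness check via the NNZ count
                break
            if use_push(nnz, n):   # sparsify again if the frontier shrank
                sparse = True
                frontier = {v for v in range(n) if nxt[v]}
            else:
                frontier = nxt
    return levels
```

With a tiny graph the 0.001 threshold densifies after the first step, but the computed levels match the purely sparse variant either way.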
95 changes: 95 additions & 0 deletions src/GraphBLAS-sharp.Backend/Algorithms/SSSP.fs
@@ -0,0 +1,95 @@
namespace GraphBLAS.FSharp.Backend.Algorithms

open GraphBLAS.FSharp.Backend
open Brahma.FSharp
open FSharp.Quotations
open GraphBLAS.FSharp.Backend.Objects
open GraphBLAS.FSharp.Backend.Common
open GraphBLAS.FSharp.Backend.Quotes
open GraphBLAS.FSharp.Backend.Vector
open GraphBLAS.FSharp.Backend.Vector.Dense
open GraphBLAS.FSharp.Backend.Objects.ClContext
open GraphBLAS.FSharp.Backend.Objects.ArraysExtensions
open GraphBLAS.FSharp.Backend.Objects.ClCell

module SSSP =
let run (clContext: ClContext) workGroupSize =

let less = ArithmeticOperations.less<int>
let min = ArithmeticOperations.min<int>
let plus = ArithmeticOperations.intSumAsMul

let spMVTo =
SpMV.runTo min plus clContext workGroupSize

let create = ClArray.create clContext workGroupSize

let createMask = ClArray.create clContext workGroupSize

let ofList = Vector.ofList clContext workGroupSize

let eWiseMulLess =
ClArray.map2InPlace less clContext workGroupSize

let eWiseAddMin =
ClArray.map2InPlace min clContext workGroupSize

let fillSubVectorTo =
Vector.assignByMaskInPlace (Convert.assignToOption Mask.assignComplemented) clContext workGroupSize

let containsNonZero =
ClArray.exists Predicates.isSome clContext workGroupSize

fun (queue: MailboxProcessor<Msg>) (matrix: ClMatrix.CSR<int>) (source: int) ->
let vertexCount = matrix.RowCount

//None represents an unreached vertex (encoded as System.Int32.MaxValue)
let distance =
ofList queue DeviceOnly Dense vertexCount [ source, 0 ]

let mutable f1 =
ofList queue DeviceOnly Dense vertexCount [ source, 0 ]

let mutable f2 =
create queue DeviceOnly vertexCount None
|> ClVector.Dense

let m =
createMask queue DeviceOnly vertexCount None
|> ClVector.Dense

let mutable stop = false

while not stop do
match f1, f2, distance, m with
| ClVector.Dense front1, ClVector.Dense front2, ClVector.Dense dist, ClVector.Dense mask ->
//Getting new frontier
spMVTo queue matrix front1 front2

//Checking which distances were updated
eWiseMulLess queue front2 dist mask
//Updating
eWiseAddMin queue dist front2 dist

//Filtering unproductive vertices
fillSubVectorTo queue front2 mask (clContext.CreateClCell 0) front2

//Swap fronts
let temp = f1
f1 <- f2
f2 <- temp

//Checking if no distances were updated
stop <-
not
<| (containsNonZero queue mask).ToHostAndFree(queue)

| _ -> failwith "not implemented"

f1.Dispose queue
f2.Dispose queue
m.Dispose queue

match distance with
| ClVector.Dense dist -> dist
| _ -> failwith "not implemented"
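The SSSP loop is a frontier-based Bellman-Ford over the (min, +) semiring: the SpMV relaxes all edges out of the current front, `eWiseMulLess` marks vertices whose tentative distance improved, `eWiseAddMin` folds the improvements into the distance vector, and only improved vertices stay active. A small Python sketch under those assumptions (a dict replaces the dense front and mask vectors, and `math.inf` stands in for the None/Int32.MaxValue encoding):

```python
import math

def sssp(indptr, indices, weights, n, source):
    """Frontier-based Bellman-Ford sketch mirroring the SSSP loop."""
    INF = math.inf                 # stands in for the F# None distance
    dist = [INF] * n
    dist[source] = 0
    front = {source: 0}            # f1: active vertices and their distances
    while front:
        # spMVTo with (min, +): relax every edge out of the frontier
        cand = {}
        for v, d in front.items():
            for k in range(indptr[v], indptr[v + 1]):
                w = indices[k]
                nd = d + weights[k]
                if nd < cand.get(w, INF):
                    cand[w] = nd
        # eWiseMulLess builds the mask of improved vertices;
        # eWiseAddMin updates distances; unimproved vertices drop out
        front = {}
        for w, nd in cand.items():
            if nd < dist[w]:
                dist[w] = nd
                front[w] = nd
    return dist
```

The loop terminates exactly when no distance improves, which corresponds to `containsNonZero` on the mask returning false in the F# code.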