This repository has been archived by the owner on Aug 2, 2021. It is now read-only.

LazyChunkReader.ReadAt panics if depth>0 #344

Closed
gbalint opened this issue Mar 23, 2018 · 4 comments
@gbalint commented Mar 23, 2018

panic: runtime error: slice bounds out of range

goroutine 196 [running]:
github.com/ethereum/go-ethereum/swarm/storage.(*LazyChunkReader).join.func1(0xc42022d300, 0x6c, 0x1008, 0xc4200a00e0, 0xc42001c5a0, 0x0, 0xc420228008, 0xc42001c540, 0xc4200187a8, 0xc420090800, ...)
/Users/gbalint/go/src/github.com/ethereum/go-ethereum/swarm/storage/chunker.go:449 +0x3f0
created by github.com/ethereum/go-ethereum/swarm/storage.(*LazyChunkReader).join
/Users/gbalint/go/src/github.com/ethereum/go-ethereum/swarm/storage/chunker.go:448 +0x2f8
exit status 2

The problem is in this code:
https://github.com/ethersphere/go-ethereum/blob/swarm-network-rewrite/swarm/storage/chunker.go#L377

for d := 0; d < self.depth; d++ {
	off *= self.chunkSize
	length *= self.chunkSize
}
@zelig (Member) commented Mar 23, 2018

So this should be fixed in the context of light node requests using the streamer, for situations where a light server wants to retrieve hashes only, to offer them to a light client.
Offset and length (the stream range) are interpreted as sequential indexes of data chunks, but the chunker's random access operates on a data-layer byte offset and byte length.

Depth is not a good interface here, since the only use cases are the level-0 data stream and the level-1 hash (reference) stream. The API should therefore just accept a boolean parameter to control this:

if self.hashOnly {
    off *= self.chunkSize
    length *= self.chunkSize
}

Then on line https://github.com/ethersphere/go-ethereum/blob/swarm-network-rewrite/swarm/storage/chunker.go#L409

if self.hashOnly && depth == 1 {
...

gbalint added a commit that referenced this issue Mar 28, 2018
gbalint added a commit that referenced this issue Mar 30, 2018
* swarm: Integrate encryption into dpa and chunker

This is not a fully functional change.
The chunker API changed: it uses Putter and Getter interfaces to put and get
chunks from the store and to do the encryption/decryption if needed. These
putters and getters should also do the hashing, so hashing-related code
can be removed from the chunker.
Some things are temporary: the pyramid chunker doesn't work yet (only the tree
chunker does), and hashing and putting in the DPA are not parallelized.

* swarm/storage: Use TreeChunker in DPA

PyramidChunker doesn't work yet with Putter/Getter

* swarm/storage: HasherStore needs a new hasher instance on every chunk
creation

This fixes the tests with pyramid chunker

* swarm/storage: Add back tree/pyramid comparison in chunker test

* swarm: Refactor chunker, new chunker instance is needed for each job

Chunker code is much simpler if all data of the job is stored on the
chunker object itself.

* swarm/storage: Remove commented code

* swarm/storage: Added comments

* cmd/swarm: Remove Branches from config

* swarm/storage: Remove commented code

* swarm/network: Remove chunker from syncer

* swarm/storage: Remove Chunker/Splitter/Joiner interfaces

They are not used anymore

* swarm/storage: Fix comment

* swarm/storage: Reenable chunker_test.TestDataAppend

* swarm/storage: Fix comments

* swarm/storage, cmd/swarm: Add FakeChunkStore

Instead of using a nil ChunkStore in hasherStore when we don't want it to
actually store the chunk data, a FakeChunkStore instance should now be used.
This allows the nil checks on hasherStore.store to be removed.

* swarm/storage: Use DefaultChunkSize as the constant, not DefaultBranches

The branches value is what changes: it depends on the chunk size and the
hash size

* swarm/storage: Remove unnecessary return statement

* swarm/storage: Rename unfinishedChunk to unfinishedChunkData

* swarm/storage: Move NewTreeSplitterParams to chunker.go

It was in pyramid.go, although TreeSplitter is in chunker.go

* swarm/storage: Minor cleanup

incrementWorkerCount can be moved into runWorker
remove obsolete comment

* swarm/storage: Add TODO comment to chunker depth parameter

Depth can only be 0 because of a bug: #344

* swarm/storage: Fix padding in hasherStore

There was a bug that data was not lengthened to padding

* swarm/storage: Add unit test for hasherStore

* swarm/storage: Decrease initial counter in span encryption

* swarm/storage: Renames in benchmark tests

* swarm/storage: Fixed pyramid benchmark test

* swarm/storage: Convert TestHasherStore to table-driven test

* swarm/storage, cmd/swarm: New MapChunkStore, simple ChunkStore implementation

It is not just the tests that need this kind of implementation

* swarm/storage: Cleanup on hashSize/chunkSize handling
@zelig (Member) commented Apr 19, 2018

is this still failing?

@gbalint (Author) commented Apr 20, 2018

Sure, we haven't worked on this.

@gbalint (Author) commented Apr 20, 2018

See also #413

@gbalint gbalint self-assigned this Apr 25, 2018
@acud acud closed this as completed May 28, 2019