MemoryFS example throughput #31

@moreaki

Description

G'day and thank you for this interesting project. I have just checked it out and fired up

$ ./examples/memoryfs.sh ~/memoryfs 1>/dev/null 2>&1

Having previously written file systems in C with FUSE for Mac, Linux and Windows, I immediately ran the following comparison:

$ gdd if=/dev/zero of=~/50M bs=1M count=50
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 0.420642 s, 125 MB/s
$ gdd if=/dev/zero of=~/memoryfs/50M bs=1M count=50
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 32.0672 s, 1.6 MB/s
$ uname -a
Darwin surimacpro 13.0.0 Darwin Kernel Version 13.0.0: Thu Sep 19 22:22:27 PDT 2013; root:xnu-2422.1.72~6/RELEASE_X86_64 x86_64

Much to my surprise, the result of this simple (and admittedly non-conclusive) write test was significantly lower than I expected. I haven't looked at the code, nor have I read much about other people's experiences, but I would have expected roughly the same cached throughput for both tests.
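For what it's worth, one quick way to check whether per-write-call overhead (rather than raw memory bandwidth) is the limiting factor would be to repeat the run at several block sizes over the same total volume; if throughput grows roughly in proportion to bs, each write's round trip through the FUSE layer is the bottleneck. A rough sketch along those lines (TARGET is a placeholder; it would point at the mount, e.g. ~/memoryfs/testfile, and defaults to a temp file here):

```shell
#!/bin/sh
# Write the same 16 MiB with three different block sizes and report dd's
# timing line for each. If MB/s scales with bs, per-call overhead dominates.
TARGET="${TARGET:-$(mktemp)}"          # placeholder; set to a file on the mount
TOTAL=$((16 * 1024 * 1024))            # fixed total volume: 16 MiB
for bs in 4096 65536 1048576; do
  count=$((TOTAL / bs))                # keep total bytes constant per run
  echo "bs=$bs count=$count"
  dd if=/dev/zero of="$TARGET" bs="$bs" count="$count" 2>&1 | tail -1
done
rm -f "$TARGET"
```

(On macOS the GNU dd from coreutils, gdd, accepts the same bs/count operands as above.)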

Care to share your ideas as to why this test ran with such a surprisingly low throughput?
