
ipfs to WebDAV bridge #31

Closed · kpmy opened this issue Sep 13, 2016 · 7 comments

kpmy commented Sep 13, 2016

Hi.
To make it easier to work with files in unixfs, I'm building middleware for the golang.org/x/net/webdav server that provides read/write access to an IPFS unixfs directory.

https://github.com/kpmy/mipfs

For now it handles basic file operations; there is still some trouble with locks, authorization, file properties, server-side copy/move, and large files (the last may be a problem separate from this project itself).

For testing purposes I have a VDS running a public, no-registration test service at http://d.ocsf.in:6001/ipfs.
You can try it with cadaver on Linux:

cadaver http://d.ocsf.in:6001/ipfs/

You can also get the root hash of the WebDAV directory:

http://d.ocsf.in:6001/hash

No guarantees, of course 😄


jbenet commented Sep 15, 2016

Very cool :)



kpmy commented Sep 17, 2016

Just in case: is there any analogue of go-ipfs-api.Shell inside go-ipfs, without the HTTP API in between?


jbenet commented Sep 17, 2016

Yeah, these are the core packages. They need some work/love to be nicer, but you can do everything there.



ghost commented Sep 17, 2016

Actually, what you're looking for is the Core API, but for go-ipfs it's still a work in progress: https://github.com/ipfs/go-ipfs/issues?utf8=%E2%9C%93&q=milestone%3A%22IPFS%20Core%20API%22%20


kpmy commented Sep 17, 2016

That's great, thanks.


kpmy commented Sep 22, 2016

For now I'm using go-ipfs-api, which talks to the ipfs daemon over its HTTP API. It looks like frequent API requests become a bottleneck for the WebDAV middleware.

@kpmy kpmy closed this as completed Sep 22, 2016
@kpmy kpmy reopened this Sep 22, 2016

kpmy commented Sep 22, 2016

Well, after some optimisations the situation got quite a bit better, but then I put IPFS_REPO on a ram-disk and it was like heaven. I/O does matter.
