feat: match spec #5
Let's block this until ipfs-shipyard/cohosting#6 is solved!
Thanks to @olizilla for passing this over to Shipyard!
Looks good (assuming the small suggestions inline are addressed).
Ok to merge, but we should not release this to NPM until we address some things from #6 (comment), namely:
- CLI: reusing existing HTTP API by default
- API: deciding how to ensure small bundle (splitting CLI and API into separate packages?)
Anyway, discussion about those happens in #6
autonome left a comment:
A couple of minor things noted in the comments, but otherwise looks generally fine.
Three areas I have questions about that are required to take this tool to production level, but which can be taken care of in later issues after this initial version merges:
- Is there any input validation? I see a lot of domain name processing, but no checking of their validity.
- There isn't much in the way of error handling. E.g., what happens when the network goes down or comes back up, or an operation returns an error from IPFS for some reason: where do we catch it, and how is it communicated to the user?
- We need tests added, and written in a way that can be plugged into the automation matrix so they can be run as a regression check for future go/js IPFS releases (once we know what that way is).
Not much, I'm just letting ipfs handle it. But we can improve this later, for sure.
True, we're just printing it.
For sure! Will create issues for this.
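For reference on the input validation question above, a minimal sketch of what a check could look like; the helper below is hypothetical and not part of this PR, and it only verifies the general shape of a domain name before handing it to IPFS:

```js
// Hypothetical helper, not part of the PR: a loose sanity check on the
// shape of a domain name (dot-separated labels plus an alphabetic TLD).
function isValidDomain (domain) {
  return /^([a-z0-9]([a-z0-9-]*[a-z0-9])?\.)+[a-z]{2,}$/i.test(domain)
}

isValidDomain('dist.ipfs.io')   // => true
isValidDomain('not a domain')   // => false
```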
Just waiting for the final call from @lidel on the …
I'm merging this 😃 If there's anything that needs fixing, it can be done in a different PR.
```console
$ ipfs-cohost dist.ipfs.io tr.wikipedia-on-ipfs.org --no-pin
$ ipfs-cohost ipfs.io --silent
```
Should it be `$ ipfs-cohost add ipfs.io --silent`?
It can work either way, but yes, will update!
```console
CumulativeSize: 6591536

$ ipfs-cohost sync
🔌 Using local ipfs daemon via http api
✔ Snapshots synced!
```
Would be nice to know which sites were updated!
Can do!
```console
$ ipfs-cohost prune [n]
🔌 Using local ipfs daemon via http api
✔ Cohosted websites cleaned!
```
Would be nice to know what was pruned and how much space was recovered.
> Would be nice to know what was pruned

Can be added!

> how much space was recovered

Unless someone calls `ipfs repo gc`, no space is, in fact, recovered, but that's not the job of ipfs-cohost. We should not `repo gc` anything without user consent, I believe.
I agree, and if you're keeping many snapshots of the same data there's going to be a bunch of de-duplication going on anyway, making it really difficult to know how much space has actually been made available for gc.
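For context, disk space is only reclaimed once the user explicitly runs garbage collection on their node, at which point anything that is neither pinned nor referenced from MFS becomes eligible for removal:

```console
$ ipfs repo gc
```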
```console
--no-pin,     Find the cumlative sizes but dont pin them
--silent, -s  Just do your job
--full        Fully cohost a website
--lazy        Lazily cohost a website (default)
```
Why is lazy the default? Most websites aren't super big, so personally I'd be OK with co-hosting them in full. Also, we want to encourage cohosting, right? People who don't take the time to read the help will just run `ipfs-cohost add wikipedia` and, bam, they think they're hosting the whole thing when actually they're not.
We have an issue about that: ipfs-shipyard/cohosting#6, please take a look.
Personally, I wouldn't define the default for this in the spec. There are going to be different use cases and situations for lazy vs full. I think for a command line utility where I'm explicitly adding sites I want to cohost, the default should be full, since lazily cohosting a site would require me to use a different application to explore it and start to lazily load bits of it. `ipfs-cohost add wikipedia` is effectively useless unless I then fire up my browser, load up my gateway, navigate to the site I just decided to cohost, and then start navigating around.
I totally get that in companion you'd want to set this to lazy by default, because it transparently redirects IPFS sites and it's likely to be more of an ambient setting than an explicit COHOST THIS NOW type of action.
cc @lidel
> I think for a command line utility where I'm explicitly adding sites I want to cohost the default should be full [..]
I agree the CLI expectation is "full", but English Wikipedia is 650GB and we have no API to track the progress of `refs -r` once `add --full` is executed :(
People may not understand why `add` takes days to complete.
Some ideas:
- (A) default CLI to "full", but inform user about the size of added website, giving an option to cancel before fetch starts
- (B) default CLI to "full", but fail if added website is more than X megabytes and ask user to pass explicit `--full` to indicate they are ok with the cost
- (C) return an error on `add` without explicit `--lazy` or `--full` (require explicit decision)
I think I like (B) the most, but perhaps there is a simpler way to address this?
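A minimal sketch of what option (B) could look like, assuming the size is read via `ipfs.object.stat` (as in the `CumulativeSize` output shown earlier); the threshold and option names are hypothetical, not part of the PR:

```js
// Hypothetical sketch of option (B): refuse a full cohost above a size
// threshold unless the user explicitly passed --full.
// MAX_FULL_SIZE and opts.explicitFull are assumed names, not from the PR.
const MAX_FULL_SIZE = 500 * 1024 * 1024 // e.g. 500 MB

async function ensureFullCohostIsAcceptable (ipfs, cid, opts) {
  const { CumulativeSize } = await ipfs.object.stat(cid)
  if (CumulativeSize > MAX_FULL_SIZE && !opts.explicitFull) {
    throw new Error(`website is ${CumulativeSize} bytes; pass --full to cohost it fully anyway`)
  }
}
```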
Yes, I agree that `--full` may become the default, but we should warn if it's bigger than X. So B for me.
B sounds great!
```js
spinner.stop()
log(`📍 Pinned ${res.domain}`)
const newPath = `${path}/${getTimestamp()}`
await ipfs.files.cp([cid, newPath])
```
I'm not sure I understand how lazy works? If `cid` is the CID of a directory (which presumably it is for a website), then `ipfs.files.cp` will copy the directory and its contents, right? How does it know when I've visited pages on a website? When are they added to MFS?
@alanshaw if you don't have the contents in your repo, `cp` will just put a reference on MFS. Once you start navigating the website, you'll start fetching blocks and they'll be available.
If you `cp` something you don't have to MFS, it is not recursively fetched. That's why we call `refs` when the user wants to fully cohost something.
Gotcha - did not realise nodes were not recursively fetched.
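Putting the two modes together, a minimal sketch based on the snippets discussed in this thread; the function name and parameters are illustrative, with `cid` and `newPath` as in the code above:

```js
// Illustrative sketch only, not the exact code from the PR.
async function cohost (ipfs, cid, newPath, full) {
  // Lazy: copying the CID into MFS only records a reference; blocks are
  // fetched on demand as the site is browsed through the local node.
  await ipfs.files.cp(cid, newPath)

  if (full) {
    // Full: additionally walk the whole DAG so every block is fetched now.
    await ipfs.refs(cid, { recursive: true })
  }
}
```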
N.b. you don't need to wrap the `ipfs.files.cp` args in an array; instead do `await ipfs.files.cp(cid, newPath)`.
I've opened a PR against the docs to clarify this as I didn't realise it myself. ipfs-inactive/interface-js-ipfs-core#550
```js
if (opts.fetchInBackground) {
  // kick off the recursive fetch without waiting for it to finish
  ipfs.refs(cid, { recursive: true })
} else {
  // wait until every ref (and therefore every block) has been fetched
  await ipfs.refs(cid, { recursive: true })
}
```
So this will cause your node to fetch all the contents...but it's not pinned and not in MFS and will be garbage collected.
It's `cp`ed to MFS just a little above your previous comment :P
- `--no-pin` removed since we do not create pins anymore.
- `add`, `ls`, `rm`, `sync` and `gc` added.
- `add` for backwards compatibility.
- Also exports utility functions that can be imported somewhere else, such as `sync`, which might be used on IPFS Desktop.

Closes #1
License: MIT
Signed-off-by: Henrique Dias hacdias@gmail.com
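For illustration only, a hypothetical sketch of consuming the exported utilities programmatically; the package name, the `sync` export, and its signature are assumptions based on the description above, not confirmed by this PR:

```js
// Hypothetical usage sketch: assumes ipfs-cohost exports a sync(ipfs)
// utility, as hinted at in the PR description above.
const ipfsClient = require('ipfs-http-client')
const cohost = require('ipfs-cohost')

async function main () {
  const ipfs = ipfsClient({ host: 'localhost', port: '5001', protocol: 'http' })
  await cohost.sync(ipfs) // refresh the snapshots of all cohosted websites
}

main().catch(console.error)
```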