Move doc sections to "guide" + "reference" #2115
Conversation
Once the build has completed, you can preview any updated documentation at this URL: https://fluxml.ai/Flux.jl/previews/PR2115/ in ~20 minutes
Is there really no way to selectively collapse larger sidebar sections in Documenter by default? I feel like that would make the high-level order much less important and save us some decision fatigue.
I looked at a few packages linked from Documenter's docs, and don't see any such thing. But I don't think it's too bad really: you don't have to scroll so far that learning how to use some folding thing would pay.

I wonder today if we should make Ecosystem a top-level heading, with a page for packages containing Flux models, separate from a page of less closely associated links. I also wonder if we want a page of CUDA docstrings, separate from the current GPU page. For now, https://fluxml.ai/Flux.jl/previews/PR2115/CUDA/

We should probably hide several of these tutorials, at least until they can be updated, but ideally improved. That will also cut the scrolling. And argues for them being last.
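For what it's worth, Documenter's HTML writer does have one knob in this direction: a `collapselevel` keyword on `Documenter.HTML` that folds sidebar entries below a given depth. Exact behaviour varies by Documenter version, so treat this as a sketch rather than a recipe; the page paths below are placeholders, not this PR's actual layout:

```julia
# Hypothetical fragment of docs/make.jl.
# `collapselevel = 1` asks the HTML writer to fold everything
# below the top-level sidebar headings by default.
using Documenter, Flux

makedocs(
    sitename = "Flux",
    format = Documenter.HTML(collapselevel = 1),
    pages = [
        "Welcome" => "index.md",
        "Guide" => ["models/overview.md", "models/basics.md"],
        "Reference" => ["models/layers.md", "training/training.md"],
    ],
)
```

That wouldn't change the high-level order, but it would make long sections cost only one sidebar line until expanded.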
docs/make.jl
Outdated
"Building Models" => [ | ||
"Built-in Layers 📚" => "models/layers.md", | ||
"Training" => "training/training.md", | ||
# "Regularisation" => "models/regularisation.md", # consolidated in #2114 |
PR started out just re-organising sections.
This commented line should probably stay until #2114 lands.
```markdown
---

## Alternatives to Flux
```
Small changes to the links page. The biggest is this section on SimpleChains / KNet / Lux. See what you think?
```markdown
## Installation

Download [Julia 1.6](https://julialang.org/downloads/) or later, preferably the current stable release. You can add Flux using Julia's package manager, by typing `] add Flux` in the Julia prompt.

This will automatically install several other packages, including [CUDA.jl](https://github.com/JuliaGPU/CUDA.jl) which supports Nvidia GPUs. To directly access some of its functionality, you may want to add `] add CUDA` too. The page on [GPU support](gpu.md) has more details.
```
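A possible aside for readers who script their environments: `] add Flux` is Pkg's REPL mode, and the same step can be written with the Pkg API. A minimal sketch, nothing Flux-specific:

```julia
# Equivalent of typing `] add Flux` at the Julia prompt.
using Pkg
Pkg.add("Flux")    # also pulls in dependencies such as CUDA.jl
# Pkg.add("CUDA")  # optional, to call CUDA.jl's own API directly
```

Whether the welcome page should mention this at all is debatable, given the goal of trimming it down.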
I tried to delete lots of text from the welcome page. It was pretty verbose.
````
@@ -232,6 +232,5 @@ Affine(3 => 1, bias=false, init=ones) |> gpu
```
```@docs
Functors.@functor
````
This macro has its (long) docstring on the Functors page, so it need not appear here (and in fact Documenter complains about it appearing twice).
… saves space on the navigation panel so you can see more other things
docs/src/ecosystem.md
Outdated
```markdown
* [SimpleChains.jl](https://github.com/PumasAI/SimpleChains.jl) is focused on making small, simple, CPU-based, neural networks fast. Uses [LoopVectorization.jl](https://github.com/JuliaSIMD/LoopVectorization.jl). (Was `FastChain` in DiffEqFlux.jl)

* [Knet.jl](https://github.com/denizyuret/Knet.jl) is a neural network library built around [AutoGrad.jl](https://github.com/denizyuret/AutoGrad.jl), with beautiful documentation.
```
```suggestion
* [Knet.jl](https://github.com/denizyuret/Knet.jl) is a neural network library built around [AutoGrad.jl](https://github.com/denizyuret/AutoGrad.jl).
```
Is "with beautiful documentation" from a tagline somewhere? Trying to think of a better point of comparison for this bullet.
It is really nice, https://denizyuret.github.io/Knet.jl/latest/mlp/
But I agree this may read as a dismissal. The entire section is a bit of a minefield...
Trying to find a way to express that Knet takes an equally valid but different approach to the ML library design problem. The README and https://denizyuret.github.io/Knet.jl/latest/tutorial/#Philosophy don't give us much in the way of talking points to succinctly contrast against Flux, however.
Yes, this seems aimed more at tensorflow et al, parallel to pytorch / Flux / etc.
docs/make.jl
Outdated
"NNlib.jl (`softmax`, `conv`, ...)" => "models/nnlib.md", | ||
"Zygote.jl (`gradient`, ...)" => "training/zygote.md", | ||
"MLUtils.jl (`DataLoader`, ...)" => "data/mlutils.md", | ||
"Functors.jl (`fmap`, ...)" => "models/functors.md", | ||
"OneHotArrays.jl (`onehot`, ...)" => "data/onehot.md", |
Having example functions right in the heading feels a little strange. Could we instead adapt the tagline of each page? e.g. "Neural Network primitives - NNlib.jl".
I don't love them, but they are compact, and I didn't manage to think of a better way (last time around). I mean I know what "primitives" means but it's not so obvious... what kind of utils is MLUtils, what on earth is Functors in less than a paragraph?
(They aren't new, and the page titles are longer explanations, which don't fit nicely in the sidebar)
My thought was to take inspiration from the shortened titles, something like so:

```suggestion
"Low-level Operations - NNlib.jl" => "models/nnlib.md",
"Taking Gradients - Zygote.jl" => "training/zygote.md",
"Working with Data - MLUtils.jl" => "data/mlutils.md",
"Manipulating Models - Functors.jl" => "models/functors.md",
"One-Hot Encoding - OneHotArrays.jl" => "data/onehot.md",
```
Looks pretty good!
Shall we do this, then refine as necessary?
Co-authored-by: Brian Chen <ToucheSir@users.noreply.github.com>
This is one take on how we might aim to organise the docs. No new text, just moving headings around.
The goal is to separate it into a "guide" half and a "reference" half.
This seems like a non-awful way to present things, which can be achieved without re-writing more than we have to. The sections which most awkwardly straddle guide/reference are the training/optimisation/regularisation ones, which need re-working for explicit params anyway. #2114 is a start.
One downside of this is that some things like `onehot` and `outputsize` don't really appear in the "guide" half at all. Maybe they ought to be squeezed in somewhere?

Edit, more comments:
Maybe "Fitting a Line" and "Quickstart" also straddle the guide/tutorial boundary a bit. Could think about moving them there.Maybe I like the choice of fast/slow intro. I see now that this was what the original overview PR envisaged: #1579 (comment) (naming exactly xor as what the quickstart should target!)Maybe the guide pages should be consolidated a bit -- "performance tips" doesn't say much, can it be rolled into "GPU & performance tips"? "Regularisation" also doesn't say much ... #2114 wants to make it just a section.
Not sure what order these sections ought to be in. One complaint in #2105 is "Ecosystem... is currently way down at the bottom". This could move up, although we can't put everything first... what exactly deserves to go last?
Do tutorials come after reference? Perhaps #2125 was merged in haste, as these tutorials aren't actually much good... maybe one or two should be updated & the rest trashed. The versions on the model zoo have stayed more up to date, maybe people should just read those -- the added text on these versions is often more confusing than helpful.