
Issue with list-table #147

Closed
asmeurer opened this issue May 26, 2020 · 4 comments
Labels
bug Something isn't working

Comments

@asmeurer (Contributor) commented:

I am trying to convert my site asmeurer/brown-water-python#2 from recommonmark to Myst.

One issue I'm having so far is with a list-table.

You can see how the table renders with recommonmark here, which is how I like it. Myst renders it like this.

You can see how I modified it for Myst here asmeurer/brown-water-python@1395106.

There are two problems:

  • The table proportions are off: MyST makes the left and right columns narrow and the middle column wide, instead of making all three equally wide.
  • The rst link in the last row doesn't work.
@asmeurer asmeurer added the bug Something isn't working label May 26, 2020

chrisjsewell commented May 26, 2020

> The rst link in the last row doesn't work.

That's because it should be written in Markdown. Another thing that should be stressed in the documentation is that all "nested" content in directives is parsed as Markdown, not rST.
I.e. `` `garbage in, garbage out <https://en.wikipedia.org/wiki/Garbage_in,_garbage_out>`_ `` should be `[garbage in, garbage out](https://en.wikipedia.org/wiki/Garbage_in,_garbage_out)`. This is IMO the major advantage over recommonmark: no switching between rST and Markdown.
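For illustration, a MyST `list-table` row using the Markdown link might look something like the sketch below (the surrounding cells and the `:widths:` values are assumptions, not the actual source of the brown-water-python table; `:widths:` is shown because it is the directive option that controls the column proportions mentioned above):

`````markdown
```{list-table}
:header-rows: 1
:widths: 33 33 34

* - Regular expressions
  - tokenize
  - ast
* - Impossible to detect edge cases in all circumstances.
  - Can still be fooled by invalid Python (though this can often be
    considered a [garbage in, garbage out](https://en.wikipedia.org/wiki/Garbage_in,_garbage_out)
    scenario).
  - Edge cases can be avoided effortlessly.
```
`````

Note the link uses Markdown syntax, since everything nested inside a MyST directive is parsed as Markdown.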


chrisjsewell commented May 26, 2020

Once you fix the link, I think they will look exactly the same:

MyST

| Regular expressions | tokenize | ast |
| --- | --- | --- |
| Can work with incomplete or invalid Python. | Can work with incomplete or invalid Python, though you may need to watch for ERRORTOKEN and exceptions. | Requires syntactically valid Python (with a few minor exceptions). |
| Regular expressions can be difficult to write correctly and maintain. | Token types are easy to detect. Larger patterns must be amalgamated from the tokens. Some tokens mean different things in different contexts. | AST has high-level abstractions such as ast.walk and NodeTransformer that make visiting and transforming nodes easy, even in complicated ways. |
| Regular expressions work directly on the source code, so it is trivial to do lossless source code transformations with them. | Lossless source code transformations are possible with tokenize, as all the whitespace can be inferred from the TokenInfo tuples. However, it can often be tricky to do in practice, as it requires manually accounting for column offsets. | Lossless source code transformations are impossible with ast, as it completely drops whitespace, redundant parentheses, and comments (among other things). |
| Impossible to detect edge cases in all circumstances, such as code that actually is inside of a string. | Edge cases can be avoided. Differentiates between actual code and code inside a comment or string. Can still be fooled by invalid Python (though this can often be considered a garbage in, garbage out scenario). | Edge cases can be avoided effortlessly, as only valid Python can even be parsed, and each node class represents that syntactic construct exactly. |

Recommonmark

(The recommonmark rendering shows the identical table content to the MyST rendering above.)

@asmeurer (Contributor, Author) commented:

Ah gotcha, the code from the invalid link was messing up the table width.

@chrisjsewell (Member) commented:

I believe this is now fixed.
