"JavaScript" is a hard target to pin down precisely what would be the best way to learn it, without first learning the history of the language, as an evolution of ECMA Script; see ECMAScript Language Specification and ECMAScript Internationalization API Specification for the most "source of truth" as it gets for JavaScript (even if a cumbersome point of reference to get up and running with a project quickly, they are important to be aware of).
Because the JavaScript ecosystem was fragmented across various browsers from the start, there is no single source of truth for how a "JavaScript Engine" enacts the ECMAScript language spec, so it's worth knowing of the major engines (with there being a distinction between the JavaScript Engine and the Browser Engine).
- ChakraCore is an evolution of the Microsoft IE JS Engine, Chakra.
- JavaScriptCore (a part of WebKit) (opensource.apple) (or on github)
- SpiderMonkey (firefox-source-docs) (mozilla-central/source) (Gecko on github)
- V8 (googlesource) v8.dev
The three "server side runtimes" are Node (V8), Deno (V8), or the recent and experimental Bun (JavaScriptCore). Bun's package management is "npm compatible", so if our goal is to create a JavaScript package, we need to focus on Node and Deno.
- Node packages can be hosted on npm (see Getting started), and are manually `npm publish`'d.
- Deno packages can be seen on Deno Third Party Modules (see Adding a module), and are cached from public GitHub repos.
- We also want to target the github hosted npm registry (also see Publishing Node.js packages).
JavaScript (at least, as it was first created by Brendan Eich in 1995) went through a long history of corporate holdouts on standardising its adoption (leading to its early fragmented sources of "truth") until 2009, when all the major implementations conformed and agreed to adopt a shared standard, from ECMAScript 5 onwards. The early fragmented nature of JavaScript has led to subsequent releases of languages that are strict supersets of JavaScript and transpile to JavaScript. By far the most common of these is Microsoft's TypeScript, which adds static typing amongst other features. Others worth noting are;
- PureScript: Functional programming language.
- AssemblyScript: TypeScript for WebAssembly.
- CoffeeScript.
Although this implementation is supposed to be the "JavaScript" implementation, for reasons discussed much further down it will more honestly be a "TypeScript" implementation that is transpiled into two different types of JavaScript, targeting the split in module systems between NodeJS's original CommonJS module system and the subsequently standardised ECMAScript Modules.
One of the top considerations when starting to learn JavaScript, with the end goal of using it via Node, is the dichotomy of the two "module formats" that Node can be used with. Prior to "ECMAScript modules" being introduced around 2015 as the standard module format that browsers adopted, NodeJS was built around its own module system, "CommonJS modules", which was the only module system supported by Node until version 12, after which ECMAScript modules were slowly introduced until being fully supported by version 13/14 (from this reflectoring blog and this logrocket blog). For a deep dive on the loading order of ECMAScript modules, see this mozilla "hacks" blog.
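To make the dichotomy concrete, here's a minimal sketch of what consuming the same (hypothetical) package looks like under each module system; the file extension (or the package's `"type"` field) is what tells Node which loader to use;

```js
// consumer.cjs -- CommonJS: modules are loaded synchronously via require().
const collatz = require('collatz');

// consumer.mjs -- ECMAScript module: modules are loaded via static import syntax.
import collatz from 'collatz';
```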
At the time of writing this, npm already has a `collatz` package (from this repo), and deno.land/x has `collatz_wasm` (which is also on npm) (from this repo). For github hosted packages, we'll look at this search. Oddly enough, at the time of writing this, I was getting ready to say there is no npm package with the name `collatz` in the github npm registry, as it was not appearing without being logged in, but when I searched again from a browser I was logged in with, this package appeared.
A good resource for starting to learn "JavaScript" would be mozilla's developer docs on JavaScript (or guide).
Two links recommended by the GitHub "Working with the npm registry" page include Creating a package.json file and Creating Node.js modules.
We'll need to start with signing up to npm. I made my profile. Something that immediately stands out is that the one time password email includes links for Configuring two-factor authentication and Creating and viewing access tokens, as 2FA is required for logging in; although an "Automation" token can be made for publishing packages without a 2FA token, it can only be made from the website, not from the CLI. So we'll do that now. There's also an option for "Linked Accounts & Recovery Options" with a GitHub account, which would seem worth trying, if it didn't request the egregious permission to "Act on your behalf". Why does it want that permission?
An important concept for npm is scope, both public and private. I hadn't realised before that npm included this feature. But because it is included, we can focus on creating a public scoped package, because there already exists an unscoped collatz package on npm. Starting off, our `node --version` is `v17.6.0`, and our `npm -v` is `8.5.5`. We can create the initial `package.json` file with default values (`--yes`) within a scope (`--scope=@scope-name`) by running `npm init --scope=@skenvy --yes`. Because it'll be initialised with a scope, we'll have to publish it with `npm publish --access=public`, as scoped packages default to private.
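For reference, a sketch of the kind of `package.json` that init generates (`--yes` fills the defaults in from the directory name and git remote, so the generated name will need checking; the values here are illustrative);

```json
{
  "name": "@skenvy/collatz",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "license": "ISC"
}
```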
The package license should be an OSI approved identifier from the SPDX License List. Although I'm trying to add as many of the optional data fields in the package json as possible, rather than use `"files": ["*"],` we'll use an .npmignore file, which can be tested with `npm pack` to generate the tarball locally. Although, to "use" the `.npmignore`, we don't actually need to add one, unless its contents would differ from the `.gitignore` that's already present, as outlined via "If there is a `.gitignore` file, and `.npmignore` is missing, `.gitignore`'s contents will be used instead." As a moment of learning, this being the first time I'm setting up a node package, I find it noteworthy that there is a "browser" option, which is mutually exclusive with "main", as I'd assumed node modules weren't supposed to target the client side! Other noteworthy options are `bin` and `man`, for packaging scripts that should be symlinked to a bin folder, and for requesting the man docs. Following this we have a whole guide dedicated to the "scripts" dictionary entry. Following the scripts and config are several different ways to define dependencies (including `"overrides"` to lock sub-dependencies), and there are the `"engines"`, `"os"`, and `"cpu"` options (and `"private"` and `"publishConfig"` to prevent accidentally pushing/publishing).
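To keep all of those straight, here's a sketch of several of these optional fields side by side (the values are illustrative, not what this package will actually set);

```json
{
  "bin": { "collatz": "./bin/cli.js" },
  "man": "./man/collatz.1",
  "overrides": { "some-sub-dependency": "1.2.3" },
  "engines": { "node": ">=14.0.0" },
  "os": ["linux"],
  "cpu": ["x64"],
  "private": false,
  "publishConfig": { "access": "public" }
}
```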
Another important concept is the difference between a "package" and a "module". Even though this won't impact how we go about creating a package, it's important to understand why "almost" all packages can be modules but not all modules are packages. Although the page linked to has additional caveats that describe additional ways to procure packages, the central defining factor of a package is "A package is a file or directory that is described by a `package.json` file." We saw above that there are two primary types of packages; those that define a `"main"`, and those that define a `"browser"` option. The ""almost" all packages can be modules" is excluding those that define a `"browser"` option. As for modules; "A module is any file or directory in the `node_modules` directory that can be loaded by the Node.js `require()` function." Things that satisfy this are packages that define a `"main"`, and other javascript files that aren't necessarily stored in packages.
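As a concrete sketch of that distinction (both module names here are hypothetical);

```js
// node_modules/some-package/ has a package.json with a "main"; it is both a package and a module.
const somePackage = require('some-package');

// node_modules/my-helper.js is a bare file with no package.json; it is a module, but not a package.
const myHelper = require('my-helper');
```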
This is the point at which we'd follow Creating Node.js modules. Although it is a nice anchor to other docs that are adjacent to it, it doesn't do much for recommending a good project setup, besides saying to have a `package.json` file with a `"name"` and `"version"`, and to include a `console.log` line in "the file". Whilst at this point I'm aware that "the file" is the `"main"` file, and even though two pages that precede it (About packages and modules and Creating a package.json file) include steps that would inform the reader of that, it's surprising that the "example" Node package creation at Creating Node.js modules is not a shade more verbose about that. I can definitely see someone jumping straight to that page and potentially being confused. There really should be a "here's all the things you need for a minimum working package". Even if technically all you need for a minimum working package is a `"name"` and `"version"` in a `package.json` file, it's a fair assumption that there would be a single page guide for slapping together a `package.json` file AND the `index.js` file (or whatever else `"main"` is).
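For what it's worth, the "minimum working package" that isn't spelled out in one place is roughly this sketch; a `package.json`;

```json
{
  "name": "@skenvy/collatz",
  "version": "0.0.1",
  "main": "index.js"
}
```

And the `index.js` that `"main"` points at (the exported function is illustrative);

```js
// index.js -- the "main" file the package.json refers to.
module.exports.printMsg = function printMsg() {
  console.log('This is a message from the demo package');
};
```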
I'd also have figured there'd be a more robust testing example besides what it currently says, which is to publish the example console log line package to npm, then swap to another directory and install it, then add a `test.js` file that requires the module and runs it. Even if there isn't a recommended test suite module, that honestly sounds like so large an anti-pattern that I'm not convinced it wasn't an off-season April Fools joke. Not solely the use of publishing to test, as there's certainly room to publish an early version to confirm the process; the part that seems grossly objectionable is that reading only that, and not reading further, could leave the impression that the process for testing requires a package to be published before it can then be installed and required and tested. Would the remedy, if it then didn't pass that testing phase, be to yank that version..?
There's a few guides that might be more robust for setting up a good node package. For instance, there's this snyk blog (which addresses this example repo), a freecodecamp blog (which is entertaining if not at least for the demonstration of publishing the absolute minimum of a package) (and which also links to an interesting aggregating site, packagephobia), and possibly look at ESLint (which may include looking at Airbnb's .eslintrc, because the Airbnb JavaScript Style Guide is supposedly noteworthy).
Before jumping into the snyk blog, let's look more at targeting the github hosted npm registry (and Publishing Node.js packages). While trying to set up the workflows, I initially added the optional caching in the `setup-node` action, but ended up getting an error of "Error: Cache folder path is retrieved for npm but doesn't exist on disk: /home/runner/.npm" in the post-step. Issues #317 and #479 appear to demonstrate that the action is generally hostile to caching.
Per the "two module systems" that can exist in node, to yield a resulting package that can support both "CommonJS modules" (*.cjs
files) AND "ECMAScript modules" (*.mjs
files) (with *.js
files being treated as the module specified in the package.json
, with CommonJS being default, and ECMAScript being if "type": "module"
is included). It's worth noting that V8, the engine Node runs on, recommends ECMAScript modules. The approach recommended by the snyk blog is to write a TypeScript module, which is then transpiled according to tsconfig
files and includes a prepack
script which tsc
compiles the TypeScript into BOTH *.cjs
AND *.mjs
formats.
This means that instead of writing JavaScript for this "JavaScript implementation", we'll be writing TypeScript instead.
The first part of setting up a TypeScript build environment for multiple build targets is taking advantage of the extensible `tsconfig.json` (reference) `extends` option, to have a shared config base and diverging configs for targeting CommonJS builds and ECMAScript builds. We'll also be replacing the `"main"` field in our `package.json` with an `"exports"` field. Although `"exports"` can be an "alternative" to `"main"`, we'll set the `"main"` (and a new `"types"` field) to target the CommonJS build by default. We'll then copy the `"scripts"` suggested by snyk for now, as they seem reasonable. One thing it appears the snyk blog glossed over is actually running `npm install typescript --save-dev` to install TypeScript as a `"devDependencies"` entry, so we'll do that now. If we got to this stage we should be able to now `npm run build` (or our preferred `make build`, which runs `npm pack`, which runs `npm run prepack`, which runs `npm run build`).
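A sketch of the shape this takes; a diverging config like `tsconfig.cjs.json` extends the shared base (the exact compiler options are illustrative of the snyk approach, not our final settings);

```json
{
  "extends": "./tsconfig.json",
  "compilerOptions": {
    "module": "commonjs",
    "outDir": "./lib/cjs"
  }
}
```

And the relevant `package.json` fields (the `"exports"` map here uses the simpler string form; per-condition objects also work);

```json
{
  "main": "./lib/cjs/index.js",
  "types": "./lib/cjs/types/index.d.ts",
  "exports": {
    ".": {
      "import": "./lib/esm/index.mjs",
      "require": "./lib/cjs/index.js"
    }
  }
}
```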
JavaScript still feels so alien to set up compared to the other languages that it's been a few months' hiatus since I started on this.
Coming back to it to continue on it, the next step in the snyk blog we were following is to add a testing framework. I have previously made changes to `mocha` tests without a greater context for them, but here we'll be using `mocha` as a test runner, `chai` as an assertions library, and `ts-node` to facilitate their operation on TypeScript files rather than plain JavaScript. To acquire these three we'll run `npm i -D mocha @types/mocha chai @types/chai ts-node` to install them as developer dependencies. We've also got to include a `~/.mocharc.json` to instruct `mocha` how to run, and a `~/tests/index.spec.ts` to provide the tests to run.
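The `.mocharc.json` ends up as small as this;

```json
{
  "extension": ["ts"],
  "spec": "./**/*.spec.ts",
  "require": "ts-node/register"
}
```

And a minimal shape for a spec file, where the imported function and expected value are illustrative;

```ts
import { expect } from 'chai';
import { collatzFunction } from '../src/index';

describe('collatzFunction', () => {
  it('returns 3n+1 for an odd input', () => {
    expect(collatzFunction({ n: 5n })).to.equal(16n);
  });
});
```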
As an intermediate and silly note here; the scope of this JavaScript implementation was originally planned to fit the repository wide expectation of having 3 emojis to symbolise it, using 🟨🟩🟥 for the yellow square of JS, the green of Node, and the red square of `npm`. As it has now increased in scope to rely on TypeScript to transpile to JavaScript, and TypeScript also uses a square logo that is a primary colour, this JavaScript implementation will buck the trend of other implementations and use 4 emojis, 🟨🟦🟩🟥, for JS, TS, Node, and npm.
The next step in the snyk blog is CI testing, utilising the `mocha` setup that had been added in the previous step. We'd however already set up exhaustive CI that was using a test step that was echoing a string. Now that the test step actually uses `mocha`, the CI also does that already. There is a following section on package testing, but as in all other instances we rely on unit testing, we'll forego package testing for now. Following that is a section on using the snyk custom action for security scanning. It feels somehow rude to use their thoroughly well done guide on how to set up a project and not use their tool at the end of it, but it requires signing up and acquiring an API key, and we're already using GitHub's integrated CodeQL (although snyk's tool might possibly work on a wider variety of languages). Then there is a section on why it's better to automatically bump versions, and explaining semantic versioning. We're specifically, intentionally, handling the versioning manually. The guide presents this in a way that any change merged to the main branch, which runs a publishing CICD workflow, should automate its version and publishing, but we use the manual change of the version, or lack thereof, to manually gate the publishing, on whether a change does or does not live up to requiring a new package release.
With that all out of the way, we've gotten all the way through implementing what's necessary for us out of the snyk blog, Best practices for creating a modern npm package. As this marks the end of setting up the core configuration for testing, transpiling, and publishing, we'll release this as a new minor version before we continue on to actually implementing the code.
Having started the implementation, an error has popped up, which appears to be different between my local and what is running in CI, because the CI is using a `node -v` of `v12.22.12` and I'm using `v17.6.0` (`npm -v` of `8.5.5`), so we'll use `n` to swap to an older version. A quick `npm list` yields the warning;

```
npm WARN read-shrinkwrap This version of npm is compatible with lockfileVersion@1, but package-lock.json was generated for lockfileVersion@2. I'll try to do my best with it!
```

So we've got to regenerate the `package-lock.json` with a new `npm install`, followed by an `npm ci` to see what it spits out as a warning / error. The `npm ci` command is not erroring, but the `npm list` command is now unhappy for a different reason;

```
├── UNMET PEER DEPENDENCY @types/node@*
npm ERR! peer dep missing: @types/node@*, required by ts-node@10.9.1
```
There are some articles that mention that npm at this version does not automatically install peer dependencies, so we'll need to `npm i -D @types/node`. This gets rid of the missing peer dependency error in the `npm list`, but does not alleviate the issue that `mocha` is complaining about;
```
TSError: ⨯ Unable to compile TypeScript:
```

With an example error such as;

```
tests/index.spec.ts:30:22 - error TS2737: BigInt literals are not available when targeting lower than ES2020.
```
Before we dig into how `mocha` is choosing which `tsconfig.*.json` file to use to know what target to transpile for, in such a way as to believe that it isn't being told to `"target": "ES2020"` (when both `tsconfig.cjs.json` and `tsconfig.esm.json` are set to `"target": "ES2020"`), it's worth addressing that we originally chose version 12 as the base dependency version, as it was the version that introduced ECMAScript module support. Yet, there is a Recommended Node TSConfig settings page which recommends node version 14 as the first version with which to use `"target": "ES2020"`. At this stage, still having relatively no idea what I'm doing in a JavaScript, let alone a TypeScript, environment, I find it bizarre that an often spruiked feature of TypeScript is its ability to simplify JavaScript development through its supposed ability to transpile any TypeScript you write into any JavaScript target, yet through this process so far, in an attempt to use JavaScript's `BigInt` (see Mozilla, V8, and the V8 blog), we've had to upgrade from `"target": "ES6"` (circa 2015, currently the recommended year to target up to) to `"target": "ES2020"`. But attempting to figure out now why `mocha` is unhappy, I've stumbled across not only a list of what is apparently a reasonable target per Node version, but also a whole collection of different tsconfigs to use for different use cases. Reading into these at a surface level, I feel like the use of TypeScript has not decreased the amount of contextual awareness required to understand what is being developed, but increased it by a lot. Part of that is likely the learning curve, but at this stage it feels incredibly counter intuitive.
For now, we'll take the recommendation of saying the minimum supported node version is 14, not 12. This appears to have solved the issue with `mocha` not being able to automatically know to transpile to a target it thought would accept bigints.
But it leaves me with the question of: knowing what features you want in your JavaScript, let alone what features you want in your TypeScript, how do you use that to determine what Node version (let alone what TypeScript target..) you'll require? Which sounds like an unreasonable thing for someone to be concerned about; i.e. more generally, "knowing what you want to use, how do you determine what version of the thing you'll need" is something you'd expect someone trying to build with a thing to be able to determine. I am simply struck by what feels like the use of TypeScript being antithetical to requiring less of me, when I'm not even sure why I've picked the configuration I have, other than some community maintained TypeScript repository vaguely suggesting it as the most appropriate to use, without any rhyme or reason as to how it determined that appropriateness -- and to understand that and feel empowered to properly wield TypeScript, I'd have to understand its internal machinations. To close off this choice of using Node version 14, which comes with npm version 6, we'll also add an "engines" setting in our package file, and add an `.npmrc` file so that we can set the `engine-strict` config there.
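Concretely, that's these two additions; in the `package.json`;

```json
{
  "engines": {
    "node": ">=14.0.0"
  }
}
```

And in the `.npmrc`;

```
engine-strict=true
```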
At this stage of having the first "Function" present in the TypeScript, we should consider investing in a linter. Using a combination of the code snippets from the java, julia, and python implementations and gluing them together here has probably led already to some gross stylisation that would require eye-bleach after looking at it. Previously it was mentioned;
> and possibly look at ESLint (which may include looking at Airbnb's .eslintrc, because the Airbnb JavaScript Style Guide is supposedly noteworthy).
Along with this, there is also a TSLint (archived repo), although per their roadmap and blog, in an effort to make TypeScript and JavaScript development more cohesive, they now recommend typescript-eslint which is ESLint running on TypeScript code, such that ESLint is the "standard" linter. So I guess that's likely the best choice.
### Install typescript-eslint
To start with we'll look at typescript-eslint's "Getting Started", which begins with the developer dependency installation `npm install --save-dev @typescript-eslint/parser @typescript-eslint/eslint-plugin eslint typescript`; which immediately pops out an error of;

```
npm ERR! notsup Required: {"node":"^12.22.0 || ^14.17.0 || >=16.0.0"}
```

So I guess we've gotta take the intersection of that and the current requirement of being `>=14.0.0`, to limit the engines to `{"node":"^14.17.0 || >=16.0.0"}`. After a quick `n 14.17.0` the installation works fine. Before seeing if we want to use Airbnb's .eslintrc, we'll try out the recommended config from typescript-eslint's "Getting Started".
First running the `npx eslint .` that is suggested yields an error;

```
/mnt/c/Workspaces/GitHub_Skenvy/Collatz/javascript/.eslintrc.js
  1:1  error  'module' is not defined  no-undef
```

Googling reveals that this issue likely comes from not having told ESLint, via its rc, that we want to run this in a node environment, and that we must add `env: {"node": true}`. This does get rid of that warning. Given the popularity of ESLint as the recommended tool, it's surprising the stackoverflow question for this doesn't have more traffic.
Secondary to the `.eslintrc.*` file, we can also add an `.eslintignore` to prevent files we don't want to lint from being included. This would include the `./node_modules` folder as well as the folder our transpiled JavaScript result goes into.
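A sketch of where the rc has landed so far (the node env fix plus the typescript-eslint recommended config), assuming our transpiled output goes to `./lib`;

```js
// .eslintrc.js
module.exports = {
  env: { node: true },
  parser: '@typescript-eslint/parser',
  plugins: ['@typescript-eslint'],
  extends: ['eslint:recommended', 'plugin:@typescript-eslint/recommended'],
};
```

```
# .eslintignore
node_modules
lib
```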
I was temporarily confused as I also tried to run `eslint .` without `npx`, as some sites offer in snippets, and it was not working. I was relying on the assumption that, because my devDependency of TypeScript allows me to use `tsc` in the `scripts` that can be used with `npm run ...`, I should be able to also use ESLint in a similar manner, but I was only trying to do so in my terminal rather than adding it to my `scripts`. It took a while of googling around to stumble on the simple answer: `./node_modules/.bin`, which contains these invocable scripts, is added to the `PATH` when invoking `npm run ...`. Sure enough, `tsc`, which has been working for a while in my `npm run ...`'s, also does not work as "just" `tsc` outside of `npm run ...`. So we can easily add an `npm run ...` entry that will use the version installed by the package lock, and we can now simply use an `npm run lint`.
As we've followed the instructions up until here, we'll swap to the recommendations of this blog, as it uses json for the `.eslintrc`, and provides an explanation for adding "rules" to it. We can also use this to try and add Airbnb's .eslintrc. We'll use `npx install-peerdeps --dev eslint-config-airbnb`, which generates the "peerDeps" installation command `npm install eslint-config-airbnb@19.0.4 eslint@^8.2.0 eslint-plugin-import@^2.25.3 eslint-plugin-jsx-a11y@^6.5.1 eslint-plugin-react@^7.28.0 eslint-plugin-react-hooks@^4.3.0 --save-dev`. We also need to add `"airbnb"` to the `"extends"` of the `.eslintrc`.
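After which the json `.eslintrc` looks something like this sketch;

```json
{
  "env": { "node": true },
  "parser": "@typescript-eslint/parser",
  "plugins": ["@typescript-eslint"],
  "extends": ["airbnb", "plugin:@typescript-eslint/recommended"]
}
```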
Well, now that we've got some linting set up, using an extensive collection of recommended rules, it's time to run it and iteratively see what I've done differently from what Airbnb's .eslintrc recommends. These differences can be added as modifications to the sets of rules that are extended in the `./.eslintrc`, through a `"rules"` map. One that I will definitely allow is `object-curly-newline`. Although `no-underscore-dangle` is something that I've avoided changing where possible, I have removed the "private" underscores from some previous implementations.
One that was a slight problem was a bunch of lines in my `./tests/index.spec.ts` popping up with;

```
23:3  error  'it' is not defined  no-undef
```
### ESLint Environments 2: Electric Boogaloo
The `no-undef` error appears to be complaining that the `it` inside the function blocks of a `describe` (as well as the `describe` itself, in other linter errors) is not declared and or defined. An answer on this stackoverflow question provides the context, and a link to some ESLint docs on Specifying Environments, with the context being that there are many environments, and;

> An environment provides predefined global variables.

Which means we can add a `/* eslint-env mocha */` comment at the top of our `./tests/index.spec.ts` and get rid of these errors.
I'm still routinely getting the errors;

```
6:26  error  Unable to resolve path to module '../src/index'  import/no-unresolved
6:26  error  Missing file extension for "../src/index"  import/extensions
7:27  error  Unable to resolve path to module '../src/index'  import/no-unresolved
7:27  error  Missing file extension for "../src/index"  import/extensions
```
The `import/no-unresolved` error can be avoided by adding this to the `.eslintrc`;

```json
"settings": {
  "import/resolver": {
    "node": {
      "extensions": [".js", ".jsx", ".ts", ".tsx"]
    }
  }
}
```
Followed by setting `import/extensions` to a warning.
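Which is just this addition to the same rc (a sketch; "warn" keeps it visible without failing the lint);

```json
"rules": {
  "import/extensions": "warn"
}
```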
Now that there is some relevant code in here, with TS being consistently transpiled to JS, tested with mocha, and linted with eslint, it's time we look at adding the last main "feature" of all (or most) of the implementations; documentation comments, and tools that compile generated documentation as webpages that we can add to our GitHub pages -- similar to our GoDoc, JavaDoc, Documenter.jl, Roxygen+Pkgdown, and RDoc sites.
A long standing project to standardise JS documentation comments is JSDoc. TypeScript has a similarly named TSDoc. Whilst it would be interesting to serve documentation generated for both the transpiled JS code as well as the source TS code, as we are directly controlling the source (although from what I've seen the comments are copied 1-to-1 to the transpiled output), our target will be to write TSDoc styled comments. Although, without testing either out, purely from reading their descriptions: JSDoc actually generates docs pages, but TSDoc simply specifies a recommended format to be consumed by other tools that will use the TSDoc format to do the docs generation. So while JSDoc just "does the whole thing", if we want to pick TSDoc as a "standard" to document the source, being TS, we'll still need to pick one of the tools that will read the TSDoc comments to generate the docs pages. Besides both of these, there also appears to be an ESDoc, which claims to be a JS documenter.
The TSDoc site links to several "popular tools" that use TSDoc comments, one of which is eslint, and another vs code, but the first tool mentioned is TypeDoc (repo), a;

> Documentation generator for TypeScript projects.
So our primary goal is to write TSDoc styled comments, and use TypeDoc to compile them into the generated documentation.
Further down the page, rather visually hidden, is a mention of (though not a link to) an `eslint-plugin-tsdoc` (here's the repo, though), an eslint plugin to, I assume, lint the TSDoc comments. That repository has not been touched in a few years, but it appears from visiting the npm page for eslint-plugin-tsdoc, which links back to the Microsoft TSDoc repo, that it is indeed a monorepo, which contains as a project within it the most recent state of the plugin `eslint-plugin-tsdoc`. So I guess we'll be able to use that!
The two new packages we want to add can be included with `npm install --save-dev typedoc eslint-plugin-tsdoc`.
After adding both of these and following the current ReadMe for `eslint-plugin-tsdoc` (adding the plugin and rule), it's not immediately obvious why it isn't flashing up warnings about the documentation styling that currently exists. If we look at what it appears to currently be doing, the first thing it's doing is not linting any comment that is not a block comment; and none of my comments are block comments. So to get it to comment on anything we need to add some block comments `/*<comment>*/`, but we need to style them as `/**<comment>*/`.
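For example, the difference between a comment the plugin ignores and one it lints is just the opener (the function here is an illustrative sketch);

```ts
// A line comment like this is ignored by eslint-plugin-tsdoc.

/**
 * A TSDoc styled block comment; the double-asterisk opener is what marks it for linting.
 * @param n - The value to advance by one Collatz step.
 */
export function step(n: bigint): bigint {
  return n % 2n === 0n ? n / 2n : 3n * n + 1n;
}
```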
With comments now in place, we'll just need to keep using these sorts of references.
I tried testing jsdoc, `npm install --save-dev jsdoc`, however when I tested it on the ECMAScript version, `npx jsdoc lib/esm/index.mjs -d docs/esm`, I got;

```
There are no input files to process.
```

And when I tested it on the CommonJS version, `npx jsdoc lib/cjs/index.js -d docs/cjs`, it returned;

```
Do not know how to serialize a BigInt
```

So JSDoc is already too much of an uphill battle to work with.
We can install a simple server with `npm install --save-dev http-server`, run it with `npx http-server`, and navigate to http://127.0.0.1:8080/docs/tsdoc.
We're now ready to once again create an empty orphan branch;

```sh
git checkout --orphan gh-pages-javascript
rm .git/index ; git clean -fdx
git commit -m "Initial empty orphan" --allow-empty
git push --set-upstream origin gh-pages-javascript
```
Apparently, specifying an input in the "destructured" format bound to an interface type, like `({ n, P = 2n, a = 3n, b = 1n }: CollatzParameters)` (where `CollatzParameters` is `{n:bigint, P?:bigint, a?:bigint, b?:bigint}`), will lead to the name of the input being inferred by TypeDoc as `__namedParameters`. Any set of `@param` comments on a function that don't address this specific case will be ignored, but not necessarily in the order mentioned by the TypeDoc example, or at least, not in the order I would assume, with the acknowledgement that I suffice the user acceptance testing role of "as a stupid user" per the KISS principle. So it's more probable the TypeDoc examples are fine, and I'm just reading something extra that isn't there. So let's experiment a bit.
For example, in this case where there are four `@param`'s, the output TypeDoc yielded was this, which simply lists `__namedParameters` as the name of the only input. Which is where I have now just seen and realised that I got my `@param`'s wrong, for accurately displaying the inputs in TypeDoc's output. So now, how can this be fixed?
Well, testing this locally, there appear to be two ways to correctly yield a different name rather than `__namedParameters`. Either a single `@param` for the one object input, or a single `@param` that renames the top level object, along with the following lines obeying the `@param` Object Literal style; i.e. `* @param AnyNameHere - various options` followed by `* @param AnyNameHere.abc - property abc` will correctly rename the `__namedParameters` to `AnyNameHere`, but I cannot get any way of rewriting the params underneath to yield their comments in the resulting site. Perhaps this is because it's not parsing the type of the object literal far enough to see the names of the parameters of the interface, and using the name of the interface instead of the "Object Literal" is a foot-gun? It would appear this is the case, because replacing the name of the interface with the "object literal" version of the interface, `{n:bigint, P?:bigint, a?:bigint, b?:bigint}`, allowed TypeDoc to yield the full documentation, as written on the function.
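So the shape that ended up working looks like this sketch (the function body is illustrative, not the exact source);

```ts
/**
 * The parameterised Collatz function.
 * @param parameters - The input object.
 * @param parameters.n - The value to operate on.
 * @param parameters.P - The modulus; 2 by default.
 * @param parameters.a - The multiplicand; 3 by default.
 * @param parameters.b - The addend; 1 by default.
 */
export function collatzFunction({ n, P = 2n, a = 3n, b = 1n }:
    { n: bigint, P?: bigint, a?: bigint, b?: bigint }): bigint {
  return n % P === 0n ? n / P : a * n + b;
}
```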
I'm not sure if there's a preferred style one way or the other on whether to use named interfaces or not; it would certainly uphold the notion of minimising the amount of repeated code, or at least repeated comments. But using a named interface rather than an "anonymous interface" (the "object literal" as the type parameter) means the doc comments won't appear for each function. Given this, I'd rather get rid of the named interface and swap over to object literals for each function's singular input's type parameter.
It turns out, though, that doing this leads to two issues. Although the docs appear nice in the generated TypeDoc site, they don't appear when hovering over the symbol in the editor, and every `@param` is creating an ESLint warning from the `tsdoc/syntax` rules;

```
tsdoc-param-tag-with-invalid-name: The @param block should be followed by a valid parameter name: The identifier cannot non-word characters
```

So it looks like the trade off of not having a nice TypeDoc site is a better in-editor experience, and the "TSDoc" way of doing it.
One thing I haven't added yet is code coverage, to report on how tested the code is. It'd be nice to have. Jest has coverage itself, but we're already set up with mocha, and I have no idea if they are interoperable at all. Googling "mocha coverage" lands us on this SO, which recommends istanbul, but via `npm install --save-dev nyc`.
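Wiring it in should then just be a matter of prefixing the existing test script (a sketch, given our mocha setup);

```json
"scripts": {
  "test": "nyc mocha"
}
```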
To deploy to deno, we can look at "adding a module". It looks like the `collatz` name is still available, so I'll need to add the webhook to this repo's webhooks. It doesn't mention it on the adding a module page or in the docs, but googling around, a few Q&A sites mention that the payload type needs to be `application/json`, then to pick to select from individual events, and only select the "branch or tag creation" event. After adding the webhook, you can open it and see recent deliveries, i.e. payloads sent to the webhook and replies. It seems creating the webhook sent a "ping", which resulted in an `http400`. Clicking the chevron expands the request that was sent, and also shows a tab to see the response payload. For instance, I got back the response;

```json
{"success":false,"error":"provided sub directory is not valid as it does not end with a /"}
```

which sounds easy to remedy, but is something I imagine it would have been able to resolve on its own. I guess we'll go back to the adding a module page, add in the trailing slash it claims to expect, and get the new webhook to send to;

```
https://api.deno.land/webhook/gh/collatz?subdir=javascript%2F
```

Well, that seems to have resolved it; I clicked to redeliver the ping, and got back success;

```json
{"success":true,"data":{"module":"collatz","repository":"Skenvy/Collatz"}}
```
The deno add_module page stayed in a state where, after typing in the module name "collatz" and the subdirectory "javascript/", the "Add the webhook" section rendered the payload url and displayed a rotating circle that said it was "waiting", for quite a while. After giving up waiting for it to do whatever it was doing, I navigated away to find those Q&A answers on what the remaining steps to set it up were. I intermittently checked back on the deno page, for example to change the subdirectory, all the while it was still "waiting".
Only on the successful ping did the page stop "waiting" -- there was a nice confetti exploding on the screen effect, but that then gave way to the steps that I had to go looking for elsewhere, finally appearing on that page.. after I'd already done them..? So, you need to use a valid webhook in the correct way for the page to dynamically rerender to show the steps you need to know to set up the webhook?
Well, at least the webhook worked, and it should deploy to deno on the next tag. But there weren't any options to configure what tags would trigger the webhook, so deno deployments will have the opposite problem to go deployments. Go wouldn't accept any tag that didn't exactly match its semver regex, but deno will deploy on every tag, most of which won't be relevant to it?
NVM, "Node version manager", is an extremely useful tool for managing multiple versions of node installed at the same time. Coming back in here to say that, while updating the supported engines to remove 14 and 16 and add 20, that I'm surprised I hadn't mentioned nvm in here before.
Yet again, a tumble down a rabbit-worm-hole has transpired from very little shaking of the node tree. A dependabot PR suggesting upgrading `chai` from `v4` to `v5` led to quite a while trying to investigate why it was difficult to run `mocha` tests with it, to learn that chai v5 dropped cjs support, and that mocha's support for esm is still experimental [2:GH] [3:SO]. Also see this SO RE 'TypeError [ERR_UNKNOWN_FILE_EXTENSION]: Unknown file extension ".ts"'. The rabbit-worm-hole also involved trying to figure out if our use of `nyc` had any influence / impact on this error, see [1] [2].
After deciding it would probably just be best to give up on it and say we're sticking with chai v4, I had a look into the compiled "esm" output that changed last year, when setting the esm `tsconfig`'s `module` and `moduleResolution` to `nodenext` to let through a typescript update. But it would appear now that I should have looked closer at the change in the compiled "esm" code back then, as this change to "nodenext" also changed the output of the "esm" build to "cjs". So the output at the moment is just two copies of cjs.
We definitely want to support esm, as the primary target. We might be able to keep supporting cjs optionally, and getting back to that would be a priority (as well as adding a check for this working in the demo, which was only checking that the cjs result was valid), but it would be nice if we could jump straight to esm as our default. We can edit the `package.json` to include the changes;
```diff
@@ -27,8 +27,9 @@
   "files": [
     "./lib/**/*"
   ],
-  "main": "./lib/cjs/index.js",
-  "types": "./lib/cjs/types/index.d.ts",
+  "main": "./lib/esm/index.mjs",
+  "types": "./lib/esm/types/index.d.ts",
+  "type": "module",
   "exports": {
     ".": {
       "import": {
```
But this still yields a `TypeError [ERR_UNKNOWN_FILE_EXTENSION]: Unknown file extension ".ts" for /<>/Collatz/javascript/tests/collatzFunction.spec.ts` during the `nyc mocha` execution. Following up by specifying `TS_NODE_PROJECT='./tsconfig.esm.json'` before the `nyc mocha` gets the same error. If we go ahead and add the `.mocharc.json` differences recommended by a comment at the end of this;
```diff
@@ -1,5 +1,7 @@
 {
   "extension": ["ts"],
   "spec": "./**/*.spec.ts",
-  "require": "ts-node/register"
+  "require": "ts-node/register",
+  "loader": "ts-node/esm",
+  "es-module-specifier-resolution": "node"
 }
```
We get the error `TS2835: Relative import paths need explicit file extensions in ECMAScript imports when '--moduleResolution' is 'node16' or 'nodenext'. Did you mean '../src/index.js'?`. Addressing this by adding `.ts` in all relative imports in the tests yields a different error, `TS5097: An import path can only end with a '.ts' extension when 'allowImportingTsExtensions' is enabled.`. Adding the requested `"allowImportingTsExtensions": true` to our base `tsconfig` now yields another 10 or so `TS2835: Relative import paths need explicit file extensions in ECMAScript imports when '--moduleResolution' is 'node16' or 'nodenext'. Did you mean './XYZ.js'?`. Doing the same thing with all relative imports in the src yields passing tests, but the subsequent `tsc` gives us `npm ERR! error TS5096: Option 'allowImportingTsExtensions' can only be used when either 'noEmit' or 'emitDeclarationOnly' is set.`.
Whilst googling to try and solve this often yielded very thoroughly answered SO posts, such as this one about modules and moduleResolution, the overall holistic answer came from here. The gist being that typescript won't ever change the name of a module, which includes not renaming a relative import of a `.ts` file to a `.js` file, but the relative link will only have any meaning in the context of having already been transpiled, so the suggestion to rename things to `.js` is not as misleading as it seemed initially, which did a lot of heavy lifting to bury the lede. But yes, relative imports with `.js`, for what they will be when they are transpiled, lets it work! However we still need the relative links to `.ts` files in our mocha spec files. So we can get around both of these by having a new tsconfig that extends our esm set of options, and adding the necessary `allowImportingTsExtensions` setting to that one, so that mocha sees it's allowed to import `.ts`, our `tsc` can still emit esm code, and won't complain that it can't emit anything with `allowImportingTsExtensions` set. Now it's just a matter of getting the demo to be happy with both import and require.
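A sketch of that extra tsconfig (the `tsconfig.mocha.json` filename is hypothetical; since it's only used by the test runner, it can also set `noEmit`, which `allowImportingTsExtensions` requires anyway);

```json
{
  "extends": "./tsconfig.esm.json",
  "compilerOptions": {
    "allowImportingTsExtensions": true,
    "noEmit": true
  }
}
```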