RFC: Monorepo #336
Changes from 1 commit
@@ -2,18 +2,14 @@
# RFC: Monorepo

**Status:** 🚧 WIP, comments are welcome nonetheless

## Reviewers

- [ ] @zackkrida
- [ ] @sarayourfriend

## Rationale

~~For a comprehensive discussion about the pros, the cons and the counterpoints to each see [discussion](https://github.com/WordPress/openverse/issues/192). This is not the purpose of this RFC.~~

~~This RFC summarily lists the benefits and then, with the twin assumptions of a monorepo being ultimately beneficial and the decision to migrate being finalised in the above discussion, proceeds to go into the implementation details.~~

For a comprehensive discussion about the pros, the cons and the counterpoints to each see [discussion](https://github.com/WordPress/openverse/issues/192). Some of the more nuanced points are listed below, biased towards the overall benefits of a monorepo to justify the RFC. This RFC also proceeds to go into the implementation details hoping that the benefits are cumulatively enough of an improvement to convince everyone to migrate.

### Benefits of monorepo
@@ -79,13 +75,17 @@ First we will merge the API and the frontend repos into `WordPress/openverse`. T

1. The merge of two JavaScript codebases provides fertile ground for testing `pnpm` workspaces.

   - It also allows us to merge the browser extension later and split the design system/component library stuff into a separate package.

1. The API is already organised by stack folders so the `frontend/` directory will fit right in with the others like `api/` and `ingestion_server/`. Similarly the scripts repo is nicely organised in folders, reducing conflicts.

1. The API and frontend share identical tooling for Git hooks, linting and formatting. We will fight our tools less and encounter minimal friction.

   - ~~The frontend's approach for `pre-commit` inspired the RFC for expaning this type of usage to the API as well!~~
   - The frontend's approach for `pre-commit` expanded this type of usage to the API as well!

   - We're expanding the use of double-quoted strings to JavaScript to further unify our style guides (a config sketch follows below).

1. ~~The entire system can be integration tested during releases. The real API, populated with test data, can even replace the Talkback server as long as we can turn off all network calls and enable 100% reliable output.~~
1. The entire system can be integration tested during releases. The real API, populated with test data, can replace the Talkback server as long as we disable network calls and make output deterministic.

The `WordPress/openverse` repo will absorb the `WordPress/openverse-api` and `WordPress/openverse-frontend` repos. The `WordPress/openverse-catalog` will also be merged, _later_.
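
To illustrate the double-quoted-strings point above: the unified string style could be enforced from a single Prettier config at the monorepo root. This is only a sketch under the assumption that Prettier is configured via a root-level `.prettierrc.yaml`; the actual Openverse Prettier settings may differ.

```yaml
# Hypothetical root-level .prettierrc.yaml for the monorepo (illustrative only).
# `singleQuote: false` makes Prettier emit double-quoted strings in JavaScript,
# matching the style already used on the Python side.
singleQuote: false
```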
@@ -101,6 +101,16 @@ The first step will be to release the API and frontend, call a code freeze on bo

This can prove difficult given how productive our team is, so we will need to channel this productivity towards the catalog in the meantime. I can foresee the end-to-end migration taking one week (ideal scenario) to become workable again, and another week for us to iron out any gaps in the docs and references.

##### Timeline breakdown

- Day 1: Merging the repos and resolving conflicts, restoring broken workflows except deploys
- Day 2: Restoring deployment workflows including staging auto-deploy
- Day 3: Transfer of issues from individual repos to monorepo
- Day 4: Documentation fixes
- Day 5: Housekeeping

The second week is planned as a buffer in case any of these tasks ends up taking more time than a day, something breaks or someone falls ill. The ideal scenario is that we're completely back in one week; the worst case takes two.

Note that nothing will break during the transition period. The old repos will continue to exist as they are until we ensure everything works, and then we archive the current split repos.

### Step 1: Merge with histories
@@ -232,10 +242,32 @@ With this done, we can archive the API and frontend repo. An optional notice may

#### Combine linting

~~All lint steps can be combined in `.pre-commit-config.yaml`. This also simplifies the CI jobs can now be merged.~~
All lint steps can be combined in `.pre-commit-config.yaml`. This also implies the CI jobs can now be merged.

1. Remove pre-commit scripts from `frontend/package.json` and the `install-pre-commit.sh` script.

1. Remove `lint` job from CI, there are plenty of those.
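
As a rough illustration of what the combined `.pre-commit-config.yaml` could look like, the sketch below runs the Python formatter only against the Python stacks and delegates frontend linting to a local `pnpm` hook. The repo revisions, hook IDs and the `lint:fix` script name are assumptions for illustration, not the actual Openverse configuration.

```yaml
# Sketch of a combined root-level .pre-commit-config.yaml (illustrative only).
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.4.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer

  - repo: https://github.com/psf/black
    rev: 22.12.0
    hooks:
      - id: black
        # Only run the Python formatter against the Python stacks.
        files: ^(api|ingestion_server)/

  - repo: local
    hooks:
      - id: eslint
        name: eslint (frontend)
        language: system
        # Delegate JS/TS/Vue linting to the frontend workspace package.
        files: ^frontend/.*\.(js|ts|vue)$
        entry: pnpm --filter "openverse-frontend" run lint:fix
```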
#### `pnpm` workspace

1. Move `frontend/.pnpmfile.cjs` out to the root directory, update the reference to `frontend/package.json`.

1. Remove `frontend/.npmrc` created earlier because `pnpm` will automatically use the one in the root of the workspace.

1. Create the `pnpm-workspace.yaml` file (a sketch follows this list), see https://github.com/dhruvkb/monoverse/blob/main/.pre-commit-config.yaml.

1. Update the `package.json` files, see the following:

   - https://github.com/dhruvkb/monoverse/blob/main/package.json
   - https://github.com/dhruvkb/monoverse/blob/main/frontend/package.json
   - https://github.com/dhruvkb/monoverse/blob/main/automations/js/package.json

1. `pnpm i` in the monorepo root.

1. Update the recipe `pnpm` in `frontend/justfile` to include `--filter "openverse-frontend"`.

1. `git commit -m "Setup workspaces"`
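
The workspace file itself can stay tiny. The sketch below assumes only two JavaScript packages, `frontend/` and `automations/js/` (mirroring the monoverse example linked above); the real list would grow as more packages are split out.

```yaml
# Sketch of pnpm-workspace.yaml at the monorepo root (illustrative only).
packages:
  - "frontend"
  - "automations/js"
```

With this in place, `pnpm --filter "openverse-frontend" install` (the same filter used in the `justfile` recipe above) operates on just the frontend package while all packages continue to share the single lockfile at the root.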
### Step 3. Restore workflows

#### New actions

@@ -248,7 +280,29 @@ To clean up the workflows we will define three new actions. The code for all thr

#### Update workflows

~~With this done, the development on the API and frontend can continue inside their subdirectories. The development of both parts will be independent. At least until we reach [long-term consolidation](#step-5-long-term-consolidation).~~
Workflows with major changes:

- `ci_cd.yml` from the API will absorb `ci.yml` from the frontend

> **Review comment:** What approach will we take to avoid running API tests when only frontend code is changed and vice-versa? Combining the CI workflows will preclude us from using the workflow […]
>
> **Reply:** We can use the […]

- `lint.yml` will be deleted

Updates:

- `migration_safety_warning.yml`
- `generate_pot.yml`

With this done, the development on the API and frontend can continue inside their subdirectories. The development of both parts will be independent, at least until we reach [long-term consolidation](#step-5-long-term-consolidation).
#### Deployment

##### Staging

The soon-to-be-ECS-based API and the ECS-based frontend will continue to deploy to staging via the CI + CD pipeline, with deployment starting as soon as all CI checks have passed. They will use code similar to what the frontend's staging auto-deploy uses currently.

These will be separate jobs with specific [path-based filters](https://github.com/dorny/paths-filter).

> **Review comment:** What is the benefit of using this […]?
>
> **Reply:** This action allows us to filter jobs instead of applying the filter to the entire workflow. This allows us to have a complete workflow where the jobs can be made to run based on the outcome of other jobs with […]. If these jobs were in separate workflows, I don't think it's possible to run a workflow's jobs if jobs in a different workflow have passed.
>
> **Reply:** It is possible to have cross-workflow dependencies, actually: https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#workflow_run
>
> I only have two concerns with using the actions: […]
>
> If you think either of these are not a concern, then that is fine. The first should be documented as an expectation and accepted trade-off. I think the second would bite us later on, but I can accept that I may be being too cautious. One more thing that crossed my mind as I was writing this is whether our concurrency settings will need to change for jobs. My hunch is no, but it's worth looking into to confirm as they can be quite nasty to think through.
>
> **Reply:** The annoying-to-maintain code might be true, but it can be overcome by abstracting the […]. Point 2 could be an issue as it will spin up a job and cancel it if the criterion fails; an alternative to that could be to have a job that only matches the condition of the paths. Let's say jobs called "is_api", "is_frontend" etc. The other jobs can […]
>
> **Reply:** I hadn't considered that, using a single (or handful of) job(s) that determine(s) which part of the project is being changed. It could actually be more flexible as the jobs could be […]

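
To make the discussion above concrete, here is a rough sketch of how a single `dorny/paths-filter` job could gate the stack-specific jobs inside a combined `ci_cd.yml`. The job names, filter names and paths are illustrative assumptions, not the final workflow.

```yaml
# Sketch of a combined CI workflow using dorny/paths-filter (illustrative only).
name: CI + CD
on:
  push:
    branches: [main]
  pull_request:

jobs:
  # Single job that determines which parts of the monorepo were touched,
  # along the lines of the "is_api" / "is_frontend" idea above.
  changes:
    runs-on: ubuntu-latest
    outputs:
      api: ${{ steps.filter.outputs.api }}
      frontend: ${{ steps.filter.outputs.frontend }}
    steps:
      - uses: actions/checkout@v3
      - uses: dorny/paths-filter@v2
        id: filter
        with:
          filters: |
            api:
              - 'api/**'
              - 'ingestion_server/**'
            frontend:
              - 'frontend/**'

  # Stack-specific jobs only run when their paths changed.
  test-api:
    needs: changes
    if: needs.changes.outputs.api == 'true'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # ... run the API test suite here

  test-frontend:
    needs: changes
    if: needs.changes.outputs.frontend == 'true'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # ... run the frontend test suite here
```

Because `changes` is an ordinary job, downstream jobs can combine `needs` with `if` conditions on its outputs, which is the flexibility discussed in the thread above.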
##### Production

For production, we will not be able to use GitHub Releases and will have to use a manually-triggered workflow to build the assets and tag them. The tag can be set via a workflow input (simple) or can be calculated based on the update scope of major, minor or patch (not as simple).

> **Review comment:** Another way to simplify this is for us to switch to SHA-based tags for release deployment. Rather than creating a new build, just use the build tagged with the […]
>
> **Reply:** The slight issue with SHA tags would be that it's harder to identify versioning with them. Given two hashes, it's not possible to say which is the newer version.
>
> **Reply:** Ah, good point!

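
A minimal sketch of the "tag via workflow input" option might look like the following. The workflow name, input name and image names are assumptions for illustration; it also tags the build with the commit SHA, so the SHA-based idea from the comment above stays compatible with human-readable versions.

```yaml
# Sketch of a manually-triggered production release workflow (illustrative only).
name: Release API

on:
  workflow_dispatch:
    inputs:
      tag:
        description: "Version tag for the built assets, e.g. v2.5.0"
        required: true

jobs:
  build-and-tag:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # Build once, then attach both the commit SHA and the human-readable tag.
      - run: |
          docker build -t openverse-api:${{ github.sha }} api/
          docker tag openverse-api:${{ github.sha }} openverse-api:${{ github.event.inputs.tag }}
```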
### Step 4. Housekeeping and DX cleanup
> **Review comment:** I love this! 🚀
>
> **Review comment:** We would have to make sure that the API response types never differ from the sample data, though. For example, if we say that the license URL is always present, and it's present in the test data, but absent in the real API responses, that would break the app even though the tests pass.
>
> **Review comment:** Next step would be integrating the catalog so the supplier of the data would also be a part of the tests 👍.