Conversation

@jiangphcn
Contributor

@jiangphcn jiangphcn commented Feb 10, 2020

Overview

Porting spidermonkey60 support to fdb-layer

Testing recommendations

Related Issues or Pull Requests

The code being ported is mainly from
#2345

Checklist

  • Code is written and works correctly
  • Changes are covered by tests
  • Any new configurable parameters are documented in rel/overlay/etc/default.ini
  • A PR for documentation changes has been made in https://github.com/apache/couchdb-documentation

davisp and others added 30 commits July 31, 2019 11:55
Most of these tests are for quorum and clustered response handling which
will no longer exist with FoundationDB. Eventually we'll want to go
through these and pick out anything that is still applicable and ensure
that we re-add them to the new test suite.
This provides a base implementation of a fabric API backed by
FoundationDB. While a lot of functionality is provided there are a
number of places that still require work. An incomplete list includes:

  1. Document bodies are currently a single key/value
  2. Attachments are stored as a range of key/value pairs
  3. There is no support for indexing
  4. Request size limits are not enforced directly
  5. Auth is still backed by a legacy CouchDB database
  6. No support for before_doc_update/after_doc_read
  7. Various implementation shortcuts need to be expanded for full API
     support.
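The first two points in the list above can be illustrated with a small sketch. This is a hypothetical layout in Python, not the actual fabric2 keyspace: it only shows the difference between a document body stored as a single key/value and an attachment stored as a range of chunked key/value pairs.

```python
# Hypothetical sketch, not the real fabric2 key layout. It illustrates
# points 1 and 2: a document body under one key versus an attachment
# split across a range of keys.

CHUNK_SIZE = 100_000  # FoundationDB limits a single value to 100 kB

def doc_body_kv(db_prefix, doc_id, body_bytes):
    # Point 1: the whole encoded body lives under a single key.
    return [((db_prefix, "docs", doc_id), body_bytes)]

def attachment_kvs(db_prefix, doc_id, att_name, data):
    # Point 2: the attachment is split into an ordered range of chunks,
    # one key/value pair per chunk, keyed by chunk index.
    return [
        ((db_prefix, "atts", doc_id, att_name, i), data[off:off + CHUNK_SIZE])
        for i, off in enumerate(range(0, len(data), CHUNK_SIZE))
    ]
```

A range read over the `(db_prefix, "atts", doc_id, att_name, ...)` prefix would then reassemble the attachment in chunk order.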
This provides a good bit of code coverage for the new implementation.
We'll want to expand this to include relevant tests from the previous
fabric test suite along with reading through the various other tests and
ensuring that we cover the API as deeply as is appropriate for this
layer.
This is not an exhaustive port of the entire chttpd API. However, this
is enough to support basic CRUD operations far enough that replication
works.
This still holds all attachment data in RAM which we'll have to revisit
at some point.
When uploading an attachment we hadn't yet flushed data to FoundationDB,
which caused the md5 to be empty. The `new_revid` algorithm then treated
it as an old-style attachment and generated a random number for the new
revision.

This fix just flushes our attachments earlier in the process of updating
a document.
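The ordering issue described above can be sketched as follows. This is a simplified Python stand-in for the revid calculation, with hypothetical function names, not the actual couch_db code:

```python
import hashlib
import os

def flush_attachment(att):
    # Flushing writes the data and records its md5. Before the flush the
    # md5 field is empty, which is what triggered the bug described above.
    att["md5"] = hashlib.md5(att["data"]).digest()
    return att

def new_revid(atts):
    # Simplified stand-in for the revid calculation: any attachment with
    # an empty md5 is treated as old-style, and the new revision id falls
    # back to a random value instead of a deterministic digest.
    if any(a.get("md5", b"") == b"" for a in atts):
        return os.urandom(16).hex()
    return hashlib.md5(b"".join(a["md5"] for a in atts)).hexdigest()
```

Flushing each attachment before computing the revid keeps the result deterministic, which is the essence of the fix.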
I was accidentally skipping the step of properly
serializing/deserializing attachments.

Note to self: if someone specifies attachment headers this will likely
break when we attempt to pack the value tuple here.
The older chttpd/fabric split configured filters as one step in the
coordinator instead of within each RPC worker.
This fixes the behavior when validating a document update that is
recreating a previously deleted document. Before this fix we were
sending a document body with `"_deleted":true` as the existing document.
However, CouchDB behavior expects the previous document passed to VDU's
to be `null` in this case.
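The check described above can be sketched as a small helper. The function name is hypothetical; it only captures the rule that a deleted previous document must be presented to VDU functions as null:

```python
def vdu_prev_doc(prev_doc):
    # Hypothetical helper: CouchDB expects validate_doc_update (VDU)
    # functions to receive null (None here) as the existing document when
    # a previously deleted document is being recreated, rather than a
    # body that still carries "_deleted": true.
    if prev_doc is not None and prev_doc.get("_deleted") is True:
        return None
    return prev_doc
```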
This was a remnant from before we used a version per database.
This changes `chttpd_auth_cache` to use FoundationDB to back the
`_users` database including the `before_doc_update` and `after_doc_read`
features.
RFC: apache/couchdb-documentation#409

Main API is in the `couch_jobs` module. Additional description of internals is
in the README.md file.
Neither partitioned databases nor shard splitting will exist in a
FoundationDB layer.
This adds the mapping of CouchDB start/end keys and so on to the similar
yet slightly different concepts in FoundationDB. The handlers for
`_all_dbs` and `_all_docs` have been updated to use this new logic.
The existing logic around return codes and term formats is labyrinthine.
This is the result of much trial and error to get the new logic to
behave exactly the same as the previous implementation.
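The start/end key mapping mentioned above can be sketched roughly as below. The helper name is hypothetical; it illustrates one of the "similar yet slightly different" concepts: CouchDB's endkey defaults to inclusive, while FoundationDB range reads are exclusive on the end key.

```python
def to_fdb_range(start_key, end_key, inclusive_end=True):
    # Hypothetical helper: CouchDB's startkey/endkey default to an
    # inclusive end, while FoundationDB ranges are exclusive on the end
    # key. For raw byte keys, appending a zero byte yields the smallest
    # key strictly greater than end_key, so the half-open FDB range
    # [start_key, end_key + b"\x00") includes end_key itself.
    if inclusive_end and end_key is not None:
        end_key = end_key + b"\x00"
    return start_key, end_key
```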
Simple function change to `fabric2_db:name/1`
Previously I was forgetting to keep the previous history around which
ended up limiting the revision depth to two.
The old test got around this by using couch_httpd_auth cache in its
tests which is fairly odd given that we run chttpd_auth_cache in
production. This fixes that mistake and upgrades chttpd_auth_cache so
that it works in the test scenario of changing the authentication_db
configuration.
eiri and others added 24 commits January 28, 2020 11:03
This ets table was a holdover from when couch_expiring_cache was a non-
library OTP application. It is unused, and would prevent multiple users
of the library in the same project.
There were a couple of hacks in test/elixir/lib/couch.ex.
The changes needed to remove them have been merged into httpotion 3.1.3.
The changes were introduced in:
- valpackett/httpotion#118
- valpackett/httpotion#130
Co-Authored-By: Jan Lehnardt <jan@apache.org>
Co-Authored-By: Paul J. Davis <paul.joseph.davis@gmail.com>
* fix: avoid segfaults, patch by @davisp

* fix: build against sm60 on mac needs extra compiler flags
We've had a number of segfaults in the `make javascript` test suite. The
few times we've been able to get core dumps all appear to indicate
something wrong in the JIT compiler. Disabling the JIT compilers appears
to prevent these segfaults.
Apparently SpiderMonkey 60 changed the behavior of OOM errors to not
exit the VM. This updates the SpiderMonkey 60 implementation to match
that behavior.
This test is actually checking the behavior of an OOM in `couchjs` now
since we lifted the OS process timeout limit.
This changes the couchjs OOM test so that it will trigger more reliably
on SpiderMonkey 60. It appears that newer SpiderMonkeys are better at
conserving memory, which means this test takes longer to trigger.
This is a recurrence of #1450 caused by ec416c3
(SpiderMonkey 60 PR), where a case clause in
rebar.config.script lacks a match when configure
has not yet been run.
and max as macros through a #define

Co-authored-by: Joan Touzet <wohali@users.noreply.github.com>
@wohali
Member

wohali commented Feb 10, 2020

@jiangphcn you're going to want https://github.com/apache/couchdb/pull/2534/files#diff-0c65e7bb4bafbb8cf147cc0001ccd436 too, at least L202-207 if not the rest.

But if prototype/fdb-layer is going to hit master soon, now that 3.x has forked, is this necessary?

Co-authored-by: Joan Touzet <wohali@users.noreply.github.com>
@jiangphcn
Contributor Author

@wohali thanks for your tips. I added L202-207 for now. This PR arose while working with @garrensmith on an issue in his Mac environment. We got a segmentation fault on Catalina with SpiderMonkey 1.8.5 (possibly related to a 32-bit compatibility issue), and everything worked fine after switching to SpiderMonkey 60. This doesn't happen in every development environment, but adding SpiderMonkey 60 support to the fdb-layer branch is helpful for continuing the work.

Thanks for forking 3.x, which means prototype/fdb-layer can hit master soon. The master branch already has SpiderMonkey 60 support, but the code from the fdb-layer branch still needs to be merged into master. If my understanding is correct, it is like two small rivers (the fdb-layer branch and the master branch) converging into one big river, and we need to make the big river flow smoothly. The commits on these two branches need to be reviewed, and some commits in master might need to be replaced or rewritten by commits in fdb-layer. I am quite open to merging this PR or not. For now, it gives us the chance to have SpiderMonkey 60 support working with fdb-layer earlier.

@davisp davisp force-pushed the prototype/fdb-layer branch from b3bd36b to bdd0578 Compare March 2, 2020 22:53
@jiangphcn jiangphcn closed this Aug 5, 2020
@wohali wohali deleted the spidermonkey60-porting branch October 21, 2020 19:16