
DockerCompose: support RPCDaemon local-mode #2392

Merged: 1 commit into erigontech:devel on Jul 18, 2021

Conversation

mariuspod
Contributor

I found out it's possible to set the PID namespace so that both erigon and rpcdaemon docker containers share the same namespace. This has the nice side-effect that it's possible to use --datadir on both containers with a mounted volume because the mdbx lock is not exclusive anymore. This PR is an experimental workaround for running rpcdaemon in dockerized Local-Mode.
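
For context, a minimal compose sketch of the shared-namespace idea (service and image names here are illustrative, not the PR's exact diff; the PR itself uses pid: host, as in the review snippet further down). Presumably this works because the mdbx lock logic checks live PIDs, and with a shared PID namespace both processes can see each other:

services:
  erigon:
    image: erigon   # illustrative image name
    pid: host
    volumes:
      - ${XDG_DATA_HOME:-~/.local/share}/erigon:/var/lib/erigon
  rpcdaemon:
    image: erigon   # illustrative image name
    pid: host
    volumes:
      - ${XDG_DATA_HOME:-~/.local/share}/erigon:/var/lib/erigon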

I've tested it on Ubuntu 18.04.5 LTS with a USB stick mounted as a volume and passed via XDG_DATA_HOME.

I've also changed the default user of the containers to the $UID_GID env var, with 1000:1000 as the default if it's not set, so that permissions on the mounted volume are handled properly.
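
In compose terms that amounts to roughly one line per service (a sketch; the variable name follows the PR's $UID_GID):

user: ${UID_GID:-1000:1000}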

So far it has been working pretty stably for me 👷

… Fix some permissions issues in docker containers.
@AskAlexSharov
Collaborator

Amazing - this is exactly the help I needed. I will test it on mac/win.
Do you have a docs link explaining why this trick helps?

@AskAlexSharov
Collaborator

We need to try one more thing: maybe instead of using the host machine's UID, it's enough to just use the same UID in the 2 containers.

@AskAlexSharov
Collaborator

works on mac!

@AskAlexSharov
Collaborator

I think we can go ahead with this solution:

  • on win we don't use the Makefile (we use a .ps1 file), so going ahead without win support is fine for now; will add it later.
  • "using the same uid/gid" doesn't work - the host's uid/gid is needed for some reason.

@AskAlexSharov AskAlexSharov merged commit b69638b into erigontech:devel Jul 18, 2021
@AskAlexSharov AskAlexSharov changed the title Feat: Experimental workaround for dockerized rpcdaemon in Local-Mode DockerCompose: support RPCDaemon local-mode Jul 18, 2021
@AskAlexSharov
Collaborator

I updated the docs: #2394

@mariuspod
Contributor Author

mariuspod commented Jul 18, 2021

@AskAlexSharov
That was fast - thanks for your feedback and for merging my fix 🚀

TBH I discovered the PID issue by accident: running erigon and rpcdaemon in two separate terminals was fine, but with docker-compose I got the following error when acquiring the lock:

erigon_1      | mdbx_lck_seize:29488 lock-against-without-lck, err 11
erigon_1      | Fatal: Could not open database: mdbx_env_open: resource temporarily unavailable, label: chaindata, trace:
[
github.com/ledgerwatch/erigon/ethdb/kv.MdbxOpts.Open
github.com/ledgerwatch/erigon/node.OpenDatabase.func1
github.com/ledgerwatch/erigon/node.OpenDatabase
github.com/ledgerwatch/erigon/cmd/utils.MakeChainDatabase
main.runErigon
github.com/urfave/cli.HandleAction
github.com/urfave/cli.(*App).Run
main.main
runtime.main
runtime.goexit
]

So I figured it had something to do with the PID namespace, gave it a try, and it worked 😂 I added another PR #2397 which narrows the PID namespace so that rpcdaemon only sees erigon instead of the entire host; that also works for me and is much safer. Could you maybe test this on a mac?
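
If #2397 uses the standard compose mechanism for this, the narrowing presumably looks something like the following (a sketch, not the exact diff from that PR):

rpcdaemon:
  # join only the erigon service's PID namespace instead of the host's
  pid: service:erigon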

You should recreate the containers so that the PID mode change takes effect: docker-compose up --force-recreate

I was trying to debug mdbx_env_open a bit in core.c, but I couldn't find the right way to make changes locally and get them into the erigon binary. It seems erigon uses libmdbx from another repo and not from the one in the libmdbx submodule 😕
I wanted to patch this file locally: https://github.com/erthink/libmdbx/blob/master/src/core.c#L12359

What's the usual dev build workflow for rebuilding libmdbx for use in erigon locally?

Regarding the host's UID_GID: that's only a fix for the permission issues in the grafana and prometheus containers, and I added it to all containers for consistency. When the containers are started without a specific user they might run as root, which changes the file permissions on the mounted volume to root as well - not desirable imo.

@AskAlexSharov
Collaborator

“That repo” is mine.
After “make dist” in the libmdbx submodule, it produces mdbx.c and mdbx.h.
I copy them manually to “that repo”.
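
A rough sketch of that flow (assuming make dist writes the amalgamated sources into dist/; exact paths may differ):

cd libmdbx         # the libmdbx submodule
make dist          # produces the amalgamated mdbx.c and mdbx.h
cp dist/mdbx.c dist/mdbx.h /path/to/go-wrapper-repo/   # copy into "that repo" by hand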

@AskAlexSharov
Collaborator

You can fork “that repo”
and in Erigon do “go get <your_repo>@latest”.

@mariuspod
Contributor Author

@AskAlexSharov yeah, I thought about forking it, but I wanted a way to patch it locally.
Is there another way besides make dist and then pushing the changes to my fork, e.g. copying locally and rebuilding erigon?

@AskAlexSharov
Collaborator

Just edit mdbx.c in your fork and “go get” the new version.

You may try «go mod vendor» in Erigon. It will move all dependencies into the vendor folder - and maybe make them editable.
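
Both routes map onto standard Go module commands; a minimal sketch (the wrapper module path below is a placeholder, not the actual one):

# Option A: point Erigon at your fork of the Go wrapper module
go mod edit -replace github.com/<wrapper-owner>/<wrapper>=github.com/<you>/<wrapper>@<branch-or-commit>
go mod tidy

# Option B: vendor all dependencies and edit them in place
go mod vendor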

Review snippet (rpcdaemon service in the compose file):

  command: rpcdaemon --datadir /var/lib/erigon --private.api.addr=erigon:9090 --http.addr=0.0.0.0 --http.vhosts=* --http.corsdomain=* --http.api=eth,debug,net
  pid: host
  volumes:
    - ${XDG_DATA_HOME:-~/.local/share}/erigon:/var/lib/erigon
  ports:
    - "8545:8545"
  restart: unless-stopped


