Conversation

@momentary-lapse
Contributor

Addresses: #4979

.await;

// TODO make compatible with ActualDbPool
db_pool.pull_immutable().await
Contributor Author

I created this WIP PR to share my progress and the issue I'm currently stuck on. The crate I'm using operates with its own structure wrapping connection pools: code
And we have our own ActualDbPool. The two are similar, but it's not obvious to me how to correctly convert one to the other.
I had the idea of making ActualDbPool an enum with two possible variants, RegularPool and ReusablePool, but got stuck trying to adapt things like LemmyContext, which requires the pool struct to be clone-able (and ReusablePool is not). That also means a lot of changes to the main codebase for purely test-related purposes.
Do you folks have any ideas on how to manage this? Or should I stick to the initial plan and not use this library?
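For reference, the enum idea looks roughly like this. This is only a sketch: the placeholder types stand in for deadpool's Pool (which is Clone) and db-pool's ReusableConnectionPool (which is not), and the names are hypothetical.

```rust
// Sketch only: placeholder types stand in for deadpool's Pool
// (which is Clone) and db-pool's ReusableConnectionPool (which is not).
#[derive(Clone)]
struct RegularPoolPlaceholder;

struct ReusablePoolPlaceholder;

// The shape I tried. Note that adding `#[derive(Clone)]` to this enum
// would fail to compile, because the ReusablePool variant is not Clone.
enum ActualDbPoolSketch {
    RegularPool(RegularPoolPlaceholder),
    ReusablePool(ReusablePoolPlaceholder),
}

fn label(pool: &ActualDbPoolSketch) -> &'static str {
    match pool {
        ActualDbPoolSketch::RegularPool(_) => "regular",
        ActualDbPoolSketch::ReusablePool(_) => "reusable",
    }
}

fn main() {
    let pool = ActualDbPoolSketch::ReusablePool(ReusablePoolPlaceholder);
    println!("{}", label(&pool));
}
```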

Member


Our ActualDbPool is just a type alias for deadpool Pool<AsyncPgConnection>.

Their crate should be able to work with deadpool pools, but I'm not familiar with how to plug that into their crate... you'll have to ask them.

Contributor Author

Yeah, I see. I returned to this issue today after a week's break. I'm in contact with the db-pool author; they're helping me understand a lot of details and are really willing to collaborate, so I think we'll make this work.

I'd like to clarify one point: do we still want build_db_pool_for_tests to return ActualDbPool? db-pool has its own wrapper, ReusableConnectionPool, which works like a deadpool Pool but is a bit different and needs adaptation. It might be easier to adapt the tests to work with ReusableConnectionPool than to convert ReusableConnectionPool to ActualDbPool.

Collaborator

The return type of build_db_pool_for_tests may be changed. Also, a DbPool variant may be added if needed.

#[derive(Clone)]
pub struct LemmyContext {
pool: ActualDbPool,
pool: ContextPool,
Contributor Author

This is the point that currently blocks me, and I think it's better to consult with you again. The LemmyContext struct must be cloneable, therefore all of its fields must be too, including the pool. Unfortunately, the reusable pool from the db-pool crate is not, and I don't have access to its fields to implement the trait here.
But before asking the db-pool developer, I'd like to be sure we really need this pool cloning, especially for the tests. Cloning the pool seems a bit strange to me, but I may be missing something. I'm looking at the code now, but maybe you folks already have some insights on this.

Collaborator

Wrap it in Arc for now.
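For illustration, a minimal sketch of that suggestion. NonCloneablePool here is a hypothetical stand-in for db-pool's ReusableConnectionPool; the real field and type names in LemmyContext differ.

```rust
use std::sync::Arc;

// Hypothetical stand-in for db-pool's ReusableConnectionPool,
// which does not implement Clone.
struct NonCloneablePool {
    name: String,
}

// Wrapping the pool in Arc makes the containing struct cloneable:
// cloning an Arc only bumps a reference count, so all clones share
// the same underlying pool instance.
#[derive(Clone)]
struct TestContext {
    pool: Arc<NonCloneablePool>,
}

fn main() {
    let ctx = TestContext {
        pool: Arc::new(NonCloneablePool { name: "test_db".into() }),
    };
    let ctx2 = ctx.clone();
    // Both contexts now point at the same pool.
    println!("{} {}", Arc::strong_count(&ctx.pool), ctx2.pool.name);
}
```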

@momentary-lapse
Contributor Author

Update: I'm working on this. I can't devote much time to it, but it is slowly moving forward, and I keep the code in the branch up to date. I connected the db-pool crate to our tests and reworked most of them. I currently have a runtime error, which I plan to look at and fix this week.
After that, what is left is to change a few tests which use the build_db_pool function.


pg_ctl stop --silent
rm -rf $PGDATA
docker-compose -f docker/docker-compose-test.yml down
Member

This should remain unchanged so that it uses Postgres from a local folder instead of starting a Docker container, because starting a Docker container requires root. It is possible to allow docker commands without root, but then every application could execute docker commands and easily gain root permissions.

This will require changes to db_pool::PrivilegedPostgresConfig. Honestly that file is unnecessary, because it takes the parts from a db url and then converts them back into a db url. So it's best to remove PrivilegedPostgresConfig entirely and change the first param of DieselAsyncPostgresBackend::new from privileged_config: PrivilegedPostgresConfig to db_url: Url. The logic in privileged_database_connection_url() and restricted_database_connection_url() can be handled by Url.set_password() etc.
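As a stdlib-only illustration of the idea (the url crate's Url::set_path() and Url::set_password() would do this properly), here is a hypothetical helper that swaps the database name in a db url. It assumes the url already contains a database path segment, i.e. the simple postgres://user:pass@host/db form:

```rust
// Stdlib-only sketch of what Url::set_path() would be used for here:
// deriving per-database connection strings from one privileged db url.
// Assumes the url ends in a database path segment (postgres://user:pass@host/db).
fn with_db_name(db_url: &str, db_name: &str) -> String {
    match db_url.rfind('/') {
        // Replace everything after the last slash with the new db name.
        Some(i) => format!("{}/{}", &db_url[..i], db_name),
        None => format!("{}/{}", db_url, db_name),
    }
}

fn main() {
    let base = "postgres://lemmy:password@localhost/lemmy";
    println!("{}", with_db_name(base, "test_db_1"));
}
```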

Contributor Author

Yep, I agree. I just skipped this issue in order to focus on the main part for now.

Contributor Author

Regarding PrivilegedPostgresConfig, it looks like its main purpose is to make it easy to build connection strings to multiple databases, as required by the db-pool approach (methods like privileged_database_connection_url take db_name as a parameter). In the case of the lemmy db url we already have a database in the connection string, and I fear it would look ugly to remove/substitute it under the hood of DieselAsyncPostgresBackend. So instead I tried to add an options field to PrivilegedPostgresConfig. And it worked: the test impls::tests::post_and_comment_vote_views finally passed. The changes are WIP; I'll check the other tests first and then do a cleanup.

OnceCell::const_new();
let db_pool = POOL
.get_or_init(|| async {
let conn_string = SETTINGS.get_database_url();
Member

You need to call SETTINGS.get_database_url_with_options()?; here and then pass that directly into DieselAsyncPostgresBackend::new() (as explained in my other comment), because the db url includes the option lemmy.protocol_and_hostname. At the moment triggers are failing with unrecognized configuration parameter "lemmy.protocol_and_hostname" because that option is missing.

Contributor Author

Thanks a lot, I was about to ask about that. I really don't want to give up on this feature. I didn't have the time or mental capacity to work on it consistently, but it's better now. I recently made a workaround for db-pool permissions, which seems to work, so there's slow progress. I'm also trying to keep the branch up to date with regular lemmy updates.

db-pool = { git = "https://github.com/momentary-lapse/db-pool.git", branch = "edition2021-test-superuser", features = [
"diesel-async-postgres",
"diesel-async-deadpool",
] }
Member

This should be a dev dependency. But don't worry about it for now; what matters is to get the parallel tests working.
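For reference, moving it would mean placing the entry under a [dev-dependencies] section in the relevant Cargo.toml, along these lines (same git source, branch, and features as the snippet above; the exact crate's manifest is an assumption):

```toml
[dev-dependencies]
db-pool = { git = "https://github.com/momentary-lapse/db-pool.git", branch = "edition2021-test-superuser", features = [
  "diesel-async-postgres",
  "diesel-async-deadpool",
] }
```

Cargo only builds dev-dependencies for tests, examples, and benchmarks, so the crate would no longer be compiled into release builds.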

@dessalines
Member

What does this still need? This is one of our oldest PRs.

@momentary-lapse
Contributor Author

  1. Make the config changes mentioned above by @Nutomic

  2. Check that tests with tokio_shared_rt work. I added it to db_views crate.

  3. Add that tokio shared runtime library to other tests.

Basically, that's it. But other errors might appear.

But yeah, apologies for horrendously delaying this PR. It required communicating with another dev and adapting their library, but it has still taken too long.

@dessalines
Member

Cool, no probs. I just know it's a pain to maintain these things when we do a lot of refactor PRs, so it's best to try to get the oldest ones merged first if possible.

@momentary-lapse
Contributor Author

Update: I managed to adapt the tests, but some of them fail. The reasons seem to differ, so I'm starting with lemmy_db_views_post, where the tests use the regular db pool instead of the dedicated test pool.

@Nutomic
Member

Nutomic commented Jan 1, 2026

The initial error shown in CI is from lemmy_db_views_post, but the output is not complete. You need to download the logfile to see some earlier errors:


thread 'site::mod_log::tests::test_mod_remove_or_restore_data' (14896) panicked at /woodpecker/src/github.com/LemmyNet/lemmy/.cargo_home/git/checkouts/db-pool-42b0ece7055f5868/68d84b8/src/async/db_pool.rs:189:30:
connection pool cleaning must succeed: Query(DatabaseError(Unknown, "migrations must be managed using lemmy_server instead of diesel CLI"))
stack backtrace:
   0: __rustc::rust_begin_unwind
   1: core::panicking::panic_fmt
   2: core::result::unwrap_failed
   3: core::result::Result<T,E>::expect
   4: db_pool::async::db_pool::DatabasePoolBuilder::create_database_pool::{{closure}}::{{closure}}::{{closure}}
   5: <core::pin::Pin<P> as core::future::future::Future>::poll
   6: db_pool::async::object_pool::ObjectPool<T>::pull::{{closure}}
   7: db_pool::async::db_pool::DatabasePool<B>::pull_immutable::{{closure}}
   8: lemmy_diesel_utils::connection::build_db_pool_for_tests::{{closure}}
   9: lemmy_api_utils::context::LemmyContext::init_test_federation_config::{{closure}}
  10: lemmy_api_utils::context::LemmyContext::init_test_context::{{closure}}
  11: lemmy_api::site::mod_log::tests::test_mod_remove_or_restore_data::{{closure}}
  12: <core::pin::Pin<P> as core::future::future::Future>::poll
  13: tokio::runtime::scheduler::current_thread::CurrentThread::block_on::{{closure}}::{{closure}}
  14: <core::future::poll_fn::PollFn<F> as core::future::future::Future>::poll
  15: tokio::runtime::park::CachedParkThread::block_on::{{closure}}
  16: tokio::runtime::park::CachedParkThread::block_on
  17: tokio::runtime::context::blocking::BlockingRegionGuard::block_on
  18: tokio::runtime::scheduler::current_thread::CurrentThread::block_on::{{closure}}
  19: tokio::runtime::context::runtime::enter_runtime
  20: tokio::runtime::scheduler::current_thread::CurrentThread::block_on
  21: tokio::runtime::runtime::Runtime::block_on_inner
  22: tokio::runtime::runtime::Runtime::block_on
  23: lemmy_api::site::mod_log::tests::test_mod_remove_or_restore_data
  24: lemmy_api::site::mod_log::tests::test_mod_remove_or_restore_data::{{closure}}
  25: core::ops::function::FnOnce::call_once
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.

thread 'site::mod_log::tests::test_mod_remove_or_restore_data' (14896) panicked at /woodpecker/src/github.com/LemmyNet/lemmy/.cargo_home/git/checkouts/db-pool-42b0ece7055f5868/68d84b8/src/async/conn_pool.rs:27:9:
can call blocking only when running on the multi-threaded runtime
stack backtrace:
   0:     0x55bd377417a2 - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::haa87a551a4affa55
   1:     0x55bd377561ff - core::fmt::write::h80461e1e45e4fdd2
   2:     0x55bd37708f01 - std::io::Write::write_fmt::hbf5cebcad70aeb70
   3:     0x55bd37717482 - std::sys::backtrace::BacktraceLock::print::hf67a46baa621998e
   4:     0x55bd3771d9ef - std::panicking::default_hook::{{closure}}::h391aa815d5e47ec8
   5:     0x55bd3771d881 - std::panicking::default_hook::hd6fdcf2489bb807d
   6:     0x55bd3412ec7e - test::test_main_with_exit_callback::{{closure}}::h9778e872998a3bb6
   7:     0x55bd3771e15f - std::panicking::panic_with_hook::h185ddfb86bf14d73
   8:     0x55bd3771df0a - std::panicking::panic_handler::{{closure}}::had89ddd01b6112c9
   9:     0x55bd377175b9 - std::sys::backtrace::__rust_end_short_backtrace::h5d0fc36eef7265ea
  10:     0x55bd376fb99d - __rustc[eb8946e36839644a]::rust_begin_unwind
  11:     0x55bd37762400 - core::panicking::panic_fmt::h92c8e5abe71dd8d1
  12:     0x55bd36c503b8 - core::panicking::panic_display::hf5836e8b737d947e
  13:     0x55bd3682783d - tokio::runtime::scheduler::multi_thread::worker::block_in_place::h6380db0070847cf3
  14:     0x55bd368e73c7 - tokio::runtime::scheduler::block_in_place::block_in_place::h700614781dd60711
  15:     0x55bd367d2a87 - tokio::task::blocking::block_in_place::h2d1fbd5d5ef9e244
  16:     0x55bd3681e51b - <db_pool::async::conn_pool::ConnectionPool<B> as core::ops::drop::Drop>::drop::h3a62678511067b3f
  17:     0x55bd3680fa71 - core::ptr::drop_in_place<db_pool::async::conn_pool::ConnectionPool<db_pool::async::backend::postgres::diesel::DieselAsyncPostgresBackend<db_pool::async::backend::common::pool::diesel::deadpool::DieselDeadpool>>>::hd250b20bd1cc8bbe
  18:     0x55bd3680fbe7 - core::ptr::drop_in_place<db_pool::async::conn_pool::ReusableConnectionPool<db_pool::async::backend::postgres::diesel::DieselAsyncPostgresBackend<db_pool::async::backend::common::pool::diesel::deadpool::DieselDeadpool>>>::h04e8f2a9e1ed5b04
  19:     0x55bd36835458 - db_pool::async::db_pool::DatabasePoolBuilder::create_database_pool::{{closure}}::{{closure}}::{{closure}}::hf9445ced20b3b34c
  20:     0x55bd367e4be9 - <core::pin::Pin<P> as core::future::future::Future>::poll::h69b19cd640a1b586
  21:     0x55bd3682cf3c - db_pool::async::object_pool::ObjectPool<T>::pull::{{closure}}::hbe39a7e916e3c5ce
  22:     0x55bd3683577b - db_pool::async::db_pool::DatabasePool<B>::pull_immutable::{{closure}}::hb31384f0a45b5b8c
  23:     0x55bd36899d6f - lemmy_diesel_utils::connection::build_db_pool_for_tests::{{closure}}::h81de2850bf438fa0
  24:     0x55bd33fb1d76 - lemmy_api_utils::context::LemmyContext::init_test_federation_config::{{closure}}::h3e5b198ee8ec5f58
  25:     0x55bd33fb1aa5 - lemmy_api_utils::context::LemmyContext::init_test_context::{{closure}}::h81fd29e02cc957b1
  26:     0x55bd340bc845 - lemmy_api::site::mod_log::tests::test_mod_remove_or_restore_data::{{closure}}::h20b7ff10228e9017
  27:     0x55bd33f78608 - <core::pin::Pin<P> as core::future::future::Future>::poll::h0f7ff3f32ebdacb3
  28:     0x55bd340f64b5 - tokio::runtime::scheduler::current_thread::CurrentThread::block_on::{{closure}}::{{closure}}::h00c1ac083a537ad3
  29:     0x55bd33f56bed - <core::future::poll_fn::PollFn<F> as core::future::future::Future>::poll::h2d20cb72eb75cdb3
  30:     0x55bd340eb597 - tokio::runtime::park::CachedParkThread::block_on::{{closure}}::h06a714836dfc08bf
  31:     0x55bd340eac68 - tokio::runtime::park::CachedParkThread::block_on::h68126cbda9fd0634
  32:     0x55bd340fa8a2 - tokio::runtime::context::blocking::BlockingRegionGuard::block_on::hf69589445a5e3aed
  33:     0x55bd340f63a6 - tokio::runtime::scheduler::current_thread::CurrentThread::block_on::{{closure}}::hd6d6a392894cc9bf
  34:     0x55bd3401c379 - tokio::runtime::context::runtime::enter_runtime::ha736ed78faa2cd12
  35:     0x55bd340f5d37 - tokio::runtime::scheduler::current_thread::CurrentThread::block_on::he22fe60f47dca89c
  36:     0x55bd34096023 - tokio::runtime::runtime::Runtime::block_on_inner::h7592bcf0810caf2e
  37:     0x55bd340962ba - tokio::runtime::runtime::Runtime::block_on::hf8046d7d5bd8cca2
  38:     0x55bd33fcc767 - lemmy_api::site::mod_log::tests::test_mod_remove_or_restore_data::h271c06af2306f4a7
  39:     0x55bd340bc498 - lemmy_api::site::mod_log::tests::test_mod_remove_or_restore_data::{{closure}}::h19f1f8573460ab11
  40:     0x55bd340005c6 - core::ops::function::FnOnce::call_once::hf500cee3476ab119
  41:     0x55bd3412ea3b - test::__rust_begin_short_backtrace::h204541656cf312c3
  42:     0x55bd34144695 - test::run_test::{{closure}}::h0804340aebb2c94e
  43:     0x55bd3411ae94 - std::sys::backtrace::__rust_begin_short_backtrace::hd358f13a627bfe6c
  44:     0x55bd3411e80a - core::ops::function::FnOnce::call_once{{vtable.shim}}::h7d9f867e7de29aae
  45:     0x55bd377127af - std::sys::thread::unix::Thread::new::thread_start::h10345b7e8309cb92
  46:     0x7f71c7a1ab7b - <unknown>
  47:     0x7f71c7a985f0 - __clone
  48:                0x0 - <unknown>
thread 'site::mod_log::tests::test_mod_remove_or_restore_data' (14896) panicked at library/core/src/panicking.rs:233:5:
panic in a destructor during cleanup
thread caused non-unwinding panic. aborting.
error: test failed, to rerun pass `-p lemmy_api --lib`

Caused by:
  process didn't exit successfully: `/woodpecker/src/github.com/LemmyNet/lemmy/target/debug/deps/lemmy_api-2eed044fd756c37a` (signal: 6, SIGABRT: process abort signal)
     Running unittests src/lib.rs (target/debug/deps/lemmy_api_common-16db5b0d5028573a)

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

     Running unittests src/lib.rs (target/debug/deps/lemmy_api_crud-1efedcb172173591)

running 7 tests
test site::tests::test_not_zero ... ok
test site::create::tests::test_validate_valid_create_payload ... ok
test site::tests::test_application_question_check ... ok
test site::tests::test_site_default_post_listing_type_check ... ok
test site::update::tests::test_validate_valid_update_payload ... ok
test site::create::tests::test_validate_invalid_create_payload ... ok
test site::update::tests::test_validate_invalid_update_payload ... ok

test result: ok. 7 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.02s

     Running unittests src/lib.rs (target/debug/deps/lemmy_api_routes-778b458979d1fcce)

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

     Running unittests src/lib.rs (target/debug/deps/lemmy_api_routes_v3-d5b894d32bfe8f59)

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

     Running unittests src/lib.rs (target/debug/deps/lemmy_api_utils-ddf42747d93d366d)

running 13 tests
test claims::tests::test_should_not_validate_user_token_after_password_change ... ok

thread 'notify::tests::read_private_messages' (14917) panicked at /woodpecker/src/github.com/LemmyNet/lemmy/.cargo_home/git/checkouts/db-pool-42b0ece7055f5868/68d84b8/src/async/db_pool.rs:189:30:

thread 'notify::tests::read_private_messages' (14917) panicked at library/core/src/panicking.rs:233:5:
panic in a destructor during cleanup
thread caused non-unwinding panic. aborting.
connection pool cleaning must succeed: Query(DatabaseError(Unknown, "migrations must be managed using lemmy_server instead of diesel CLI"))
stack backtrace:
   0: __rustc::rust_begin_unwind
   1: core::panicking::panic_fmt
   2: core::result::unwrap_failed
   3: core::result::Result<T,E>::expect
   4: db_pool::async::db_pool::DatabasePoolBuilder::create_database_pool::{{closure}}::{{closure}}::{{closure}}
   5: <core::pin::Pin<P> as core::future::future::Future>::poll
   6: db_pool::async::object_pool::ObjectPool<T>::pull::{{closure}}
   7: db_pool::async::db_pool::DatabasePool<B>::pull_immutable::{{closure}}
   8: lemmy_diesel_utils::connection::build_db_pool_for_tests::{{closure}}
   9: lemmy_api_utils::context::LemmyContext::init_test_federation_config::{{closure}}
  10: lemmy_api_utils::context::LemmyContext::init_test_context::{{closure}}
  11: lemmy_api_utils::notify::tests::read_private_messages::{{closure}}
  12: <core::pin::Pin<P> as core::future::future::Future>::poll
  13: <core::pin::Pin<P> as core::future::future::Future>::poll
  14: tokio::runtime::scheduler::current_thread::CoreGuard::block_on::{{closure}}::{{closure}}::{{closure}}
  15: tokio::runtime::scheduler::current_thread::CoreGuard::block_on::{{closure}}::{{closure}}
  16: tokio::runtime::scheduler::current_thread::Context::enter
  17: tokio::runtime::scheduler::current_thread::CoreGuard::block_on::{{closure}}
  18: tokio::runtime::scheduler::current_thread::CoreGuard::enter::{{closure}}
  19: tokio::runtime::context::scoped::Scoped<T>::set
  20: tokio::runtime::context::set_scheduler::{{closure}}
  21: std::thread::local::LocalKey<T>::try_with
  22: std::thread::local::LocalKey<T>::with
  23: tokio::runtime::context::set_scheduler
  24: tokio::runtime::scheduler::current_thread::CoreGuard::enter
  25: tokio::runtime::scheduler::current_thread::CoreGuard::block_on
  26: tokio::runtime::scheduler::current_thread::CurrentThread::block_on::{{closure}}
  27: tokio::runtime::context::runtime::enter_runtime
  28: tokio::runtime::scheduler::current_thread::CurrentThread::block_on
  29: tokio::runtime::runtime::Runtime::block_on_inner
  30: tokio::runtime::runtime::Runtime::block_on
  31: lemmy_api_utils::notify::tests::read_private_messages
  32: lemmy_api_utils::notify::tests::read_private_messages::{{closure}}
  33: core::ops::function::FnOnce::call_once
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.

thread 'notify::tests::read_private_messages' (14917) panicked at /woodpecker/src/github.com/LemmyNet/lemmy/.cargo_home/git/checkouts/db-pool-42b0ece7055f5868/68d84b8/src/async/conn_pool.rs:27:9:
can call blocking only when running on the multi-threaded runtime
stack backtrace:
   0:     0x55cf49fb3022 - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::haa87a551a4affa55
   1:     0x55cf49fc7e5f - core::fmt::write::h80461e1e45e4fdd2
   2:     0x55cf49f7a3e1 - std::io::Write::write_fmt::hbf5cebcad70aeb70
   3:     0x55cf49f88b52 - std::sys::backtrace::BacktraceLock::print::hf67a46baa621998e
   4:     0x55cf49f8f0bf - std::panicking::default_hook::{{closure}}::h391aa815d5e47ec8
   5:     0x55cf49f8ef51 - std::panicking::default_hook::hd6fdcf2489bb807d
   6:     0x55cf470b9b2e - test::test_main_with_exit_callback::{{closure}}::h9778e872998a3bb6
   7:     0x55cf49f8f82f - std::panicking::panic_with_hook::h185ddfb86bf14d73
   8:     0x55cf49f8f5da - std::panicking::panic_handler::{{closure}}::had89ddd01b6112c9
   9:     0x55cf49f88c89 - std::sys::backtrace::__rust_end_short_backtrace::h5d0fc36eef7265ea
  10:     0x55cf49f6ce7d - __rustc[eb8946e36839644a]::rust_begin_unwind
  11:     0x55cf49fd4060 - core::panicking::panic_fmt::h92c8e5abe71dd8d1
  12:     0x55cf494485e8 - core::panicking::panic_display::hf5836e8b737d947e
  13:     0x55cf4900d43d - tokio::runtime::scheduler::multi_thread::worker::block_in_place::h6380db0070847cf3
  14:     0x55cf490cd037 - tokio::runtime::scheduler::block_in_place::block_in_place::h700614781dd60711
  15:     0x55cf48fd3337 - tokio::task::blocking::block_in_place::h2d1fbd5d5ef9e244
  16:     0x55cf4900410b - <db_pool::async::conn_pool::ConnectionPool<B> as core::ops::drop::Drop>::drop::h3a62678511067b3f
  17:     0x55cf48ff5661 - core::ptr::drop_in_place<db_pool::async::conn_pool::ConnectionPool<db_pool::async::backend::postgres::diesel::DieselAsyncPostgresBackend<db_pool::async::backend::common::pool::diesel::deadpool::DieselDeadpool>>>::hd250b20bd1cc8bbe
  18:     0x55cf48ff57d7 - core::ptr::drop_in_place<db_pool::async::conn_pool::ReusableConnectionPool<db_pool::async::backend::postgres::diesel::DieselAsyncPostgresBackend<db_pool::async::backend::common::pool::diesel::deadpool::DieselDeadpool>>>::h04e8f2a9e1ed5b04
  19:     0x55cf4901b058 - db_pool::async::db_pool::DatabasePoolBuilder::create_database_pool::{{closure}}::{{closure}}::{{closure}}::hf9445ced20b3b34c
  20:     0x55cf48fc1919 - <core::pin::Pin<P> as core::future::future::Future>::poll::h69b19cd640a1b586
  21:     0x55cf49012b3c - db_pool::async::object_pool::ObjectPool<T>::pull::{{closure}}::hbe39a7e916e3c5ce
  22:     0x55cf4901b37b - db_pool::async::db_pool::DatabasePool<B>::pull_immutable::{{closure}}::hb31384f0a45b5b8c
  23:     0x55cf4907fc0f - lemmy_diesel_utils::connection::build_db_pool_for_tests::{{closure}}::h81de2850bf438fa0
  24:     0x55cf46e5d3d6 - lemmy_api_utils::context::LemmyContext::init_test_federation_config::{{closure}}::hdb85dc83d2ccc603
  25:     0x55cf46e5d111 - lemmy_api_utils::context::LemmyContext::init_test_context::{{closure}}::hc143783ac5821826
  26:     0x55cf46d6277d - lemmy_api_utils::notify::tests::read_private_messages::{{closure}}::h3fe22326acb35679
  27:     0x55cf46ef5f98 - <core::pin::Pin<P> as core::future::future::Future>::poll::h1435ba9d82b556fa
  28:     0x55cf46ef5fe7 - <core::pin::Pin<P> as core::future::future::Future>::poll::h32166729adfee44b
  29:     0x55cf46f930b7 - tokio::runtime::scheduler::current_thread::CoreGuard::block_on::{{closure}}::{{closure}}::{{closure}}::h37406a6de5cc15f0
  30:     0x55cf46f93003 - tokio::runtime::scheduler::current_thread::CoreGuard::block_on::{{closure}}::{{closure}}::h7cd333dda3d55287
  31:     0x55cf46f9005d - tokio::runtime::scheduler::current_thread::Context::enter::ha8cba5d71429bc30
  32:     0x55cf46f91a3d - tokio::runtime::scheduler::current_thread::CoreGuard::block_on::{{closure}}::h0ce8833102662529
  33:     0x55cf46f9169b - tokio::runtime::scheduler::current_thread::CoreGuard::enter::{{closure}}::h18eda9808f5c9d9b
  34:     0x55cf46e00730 - tokio::runtime::context::scoped::Scoped<T>::set::h84221a19ee79a50e
  35:     0x55cf46ec9053 - tokio::runtime::context::set_scheduler::{{closure}}::h6c8eefbadac09464
  36:     0x55cf46f936f3 - std::thread::local::LocalKey<T>::try_with::h80444d30b6f1175a
  37:     0x55cf46f93217 - std::thread::local::LocalKey<T>::with::h62978d5a8c19d4c2
  38:     0x55cf46ec8fab - tokio::runtime::context::set_scheduler::h84aeedc30d013129
  39:     0x55cf46f913fe - tokio::runtime::scheduler::current_thread::CoreGuard::enter::heee2037c3b3f37dd
  40:     0x55cf46f916ff - tokio::runtime::scheduler::current_thread::CoreGuard::block_on::h366f6264d7a4b3a8
  41:     0x55cf46f8f462 - tokio::runtime::scheduler::current_thread::CurrentThread::block_on::{{closure}}::hb5eb772b544efca2
  42:     0x55cf46ef8929 - tokio::runtime::context::runtime::enter_runtime::h08ed0c292b4cc9fe
  43:     0x55cf46f8f227 - tokio::runtime::scheduler::current_thread::CurrentThread::block_on::hee404c8a442f35ae
  44:     0x55cf46dac3c3 - tokio::runtime::runtime::Runtime::block_on_inner::h8651df6f9995fdf0
  45:     0x55cf46dac65a - tokio::runtime::runtime::Runtime::block_on::h11d43148f072bbe4
  46:     0x55cf46e413d7 - lemmy_api_utils::notify::tests::read_private_messages::h88806b7077c5043b
  47:     0x55cf46d62528 - lemmy_api_utils::notify::tests::read_private_messages::{{closure}}::h3edb0f8f5f528f3b
  48:     0x55cf47019086 - core::ops::function::FnOnce::call_once::h75f8470e31d56602
  49:     0x55cf470b98eb - test::__rust_begin_short_backtrace::h204541656cf312c3
  50:     0x55cf470cf545 - test::run_test::{{closure}}::h0804340aebb2c94e
  51:     0x55cf470a5d44 - std::sys::backtrace::__rust_begin_short_backtrace::hd358f13a627bfe6c
  52:     0x55cf470a96ba - core::ops::function::FnOnce::call_once{{vtable.shim}}::h7d9f867e7de29aae
  53:     0x55cf49f83e7f - std::sys::thread::unix::Thread::new::thread_start::h10345b7e8309cb92
  54:     0x7f1ad223eb7b - <unknown>
  55:     0x7f1ad22bc5f0 - __clone
  56:                0x0 - <unknown>
error: test failed, to rerun pass `-p lemmy_api_utils --lib`

Caused by:
  process didn't exit successfully: `/woodpecker/src/github.com/LemmyNet/lemmy/target/debug/deps/lemmy_api_utils-ddf42747d93d366d` (signal: 6, SIGABRT: process abort signal)
     Running unittests src/lib.rs (target/debug/deps/lemmy_apub-0e393ec31e9fdbad)

running 7 tests
test http::community::tests::test_get_local_only_community ... ok
test http::community::tests::test_get_deleted_community ... ok
test protocol::collections::tests::test_parse_lemmy_collections ... ok
test protocol::collections::tests::test_parse_mastodon_collections ... ok

thread 'http::community::tests::test_outbox_deleted_user' (14927) panicked at library/core/src/panicking.rs:233:5:
panic in a destructor during cleanup
thread caused non-unwinding panic. aborting.

thread 'http::community::tests::test_outbox_deleted_user' (14927) panicked at /woodpecker/src/github.com/LemmyNet/lemmy/.cargo_home/git/checkouts/db-pool-42b0ece7055f5868/68d84b8/src/async/db_pool.rs:189:30:
connection pool cleaning must succeed: Query(DatabaseError(Unknown, "migrations must be managed using lemmy_server instead of diesel CLI"))
stack backtrace:
   0: __rustc::rust_begin_unwind
   1: core::panicking::panic_fmt
   2: core::result::unwrap_failed
   3: core::result::Result<T,E>::expect
   4: db_pool::async::db_pool::DatabasePoolBuilder::create_database_pool::{{closure}}::{{closure}}::{{closure}}
   5: <core::pin::Pin<P> as core::future::future::Future>::poll
   6: db_pool::async::object_pool::ObjectPool<T>::pull::{{closure}}
   7: db_pool::async::db_pool::DatabasePool<B>::pull_immutable::{{closure}}
   8: lemmy_diesel_utils::connection::build_db_pool_for_tests::{{closure}}
   9: lemmy_api_utils::context::LemmyContext::init_test_federation_config::{{closure}}
  10: lemmy_api_utils::context::LemmyContext::init_test_context::{{closure}}
  11: lemmy_apub::http::community::tests::test_outbox_deleted_user::{{closure}}
  12: <core::pin::Pin<P> as core::future::future::Future>::poll
  13: tokio::runtime::scheduler::current_thread::CurrentThread::block_on::{{closure}}::{{closure}}
  14: <core::future::poll_fn::PollFn<F> as core::future::future::Future>::poll
  15: tokio::runtime::park::CachedParkThread::block_on::{{closure}}
  16: tokio::runtime::park::CachedParkThread::block_on
  17: tokio::runtime::context::blocking::BlockingRegionGuard::block_on
  18: tokio::runtime::scheduler::current_thread::CurrentThread::block_on::{{closure}}
  19: tokio::runtime::context::runtime::enter_runtime
  20: tokio::runtime::scheduler::current_thread::CurrentThread::block_on
  21: tokio::runtime::runtime::Runtime::block_on_inner
  22: tokio::runtime::runtime::Runtime::block_on
  23: lemmy_apub::http::community::tests::test_outbox_deleted_user
  24: lemmy_apub::http::community::tests::test_outbox_deleted_user::{{closure}}
  25: core::ops::function::FnOnce::call_once
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.

thread 'http::community::tests::test_outbox_deleted_user' (14927) panicked at /woodpecker/src/github.com/LemmyNet/lemmy/.cargo_home/git/checkouts/db-pool-42b0ece7055f5868/68d84b8/src/async/conn_pool.rs:27:9:
can call blocking only when running on the multi-threaded runtime
stack backtrace:
   0:     0x560c4c30d692 - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::haa87a551a4affa55
   1:     0x560c4c321f2f - core::fmt::write::h80461e1e45e4fdd2
   2:     0x560c4c2d8021 - std::io::Write::write_fmt::hbf5cebcad70aeb70
   3:     0x560c4c2e4cb2 - std::sys::backtrace::BacktraceLock::print::hf67a46baa621998e
   4:     0x560c4c2ea81f - std::panicking::default_hook::{{closure}}::h391aa815d5e47ec8
   5:     0x560c4c2ea6b1 - std::panicking::default_hook::hd6fdcf2489bb807d
   6:     0x560c4acd330e - test::test_main_with_exit_callback::{{closure}}::h9778e872998a3bb6
   7:     0x560c4c2eaf8f - std::panicking::panic_with_hook::h185ddfb86bf14d73
   8:     0x560c4c2ead3a - std::panicking::panic_handler::{{closure}}::had89ddd01b6112c9
   9:     0x560c4c2e4de9 - std::sys::backtrace::__rust_end_short_backtrace::h5d0fc36eef7265ea
  10:     0x560c4c2cba9d - __rustc[eb8946e36839644a]::rust_begin_unwind
  11:     0x560c4c32bf10 - core::panicking::panic_fmt::h92c8e5abe71dd8d1
  12:     0x560c4bccaa98 - core::panicking::panic_display::hf5836e8b737d947e
  13:     0x560c4b58d8ad - tokio::runtime::scheduler::multi_thread::worker::block_in_place::h6380db0070847cf3
  14:     0x560c4b64d3a7 - tokio::runtime::scheduler::block_in_place::block_in_place::h700614781dd60711
  15:     0x560c4b5537a7 - tokio::task::blocking::block_in_place::h2d1fbd5d5ef9e244
  16:     0x560c4b58458b - <db_pool::async::conn_pool::ConnectionPool<B> as core::ops::drop::Drop>::drop::h3a62678511067b3f
  17:     0x560c4b575ae1 - core::ptr::drop_in_place<db_pool::async::conn_pool::ConnectionPool<db_pool::async::backend::postgres::diesel::DieselAsyncPostgresBackend<db_pool::async::backend::common::pool::diesel::deadpool::DieselDeadpool>>>::hd250b20bd1cc8bbe
  18:     0x560c4b575c57 - core::ptr::drop_in_place<db_pool::async::conn_pool::ReusableConnectionPool<db_pool::async::backend::postgres::diesel::DieselAsyncPostgresBackend<db_pool::async::backend::common::pool::diesel::deadpool::DieselDeadpool>>>::h04e8f2a9e1ed5b04
  19:     0x560c4b59b4c8 - db_pool::async::db_pool::DatabasePoolBuilder::create_database_pool::{{closure}}::{{closure}}::{{closure}}::hf9445ced20b3b34c
  20:     0x560c4b5419c9 - <core::pin::Pin<P> as core::future::future::Future>::poll::h69b19cd640a1b586
  21:     0x560c4b592fac - db_pool::async::object_pool::ObjectPool<T>::pull::{{closure}}::hbe39a7e916e3c5ce
  22:     0x560c4b59b7eb - db_pool::async::db_pool::DatabasePool<B>::pull_immutable::{{closure}}::hb31384f0a45b5b8c
  23:     0x560c4b5ffddf - lemmy_diesel_utils::connection::build_db_pool_for_tests::{{closure}}::h81de2850bf438fa0
  24:     0x560c4ababdf6 - lemmy_api_utils::context::LemmyContext::init_test_federation_config::{{closure}}::hb16dfbdbaeae420e
  25:     0x560c4ababb25 - lemmy_api_utils::context::LemmyContext::init_test_context::{{closure}}::h2dc8860aa5378828
  26:     0x560c4ab59215 - lemmy_apub::http::community::tests::test_outbox_deleted_user::{{closure}}::h9110c62800495aef
  27:     0x560c4acb1ef8 - <core::pin::Pin<P> as core::future::future::Future>::poll::h659a6b5e1fee768c
  28:     0x560c4abf04e5 - tokio::runtime::scheduler::current_thread::CurrentThread::block_on::{{closure}}::{{closure}}::hb386257240ec99a9
  29:     0x560c4ab3fd7d - <core::future::poll_fn::PollFn<F> as core::future::future::Future>::poll::h9ea5d41d84af3916
  30:     0x560c4abfd927 - tokio::runtime::park::CachedParkThread::block_on::{{closure}}::h38238f70b4019b93
  31:     0x560c4abfcff8 - tokio::runtime::park::CachedParkThread::block_on::hd7fd64ac153ab0b3
  32:     0x560c4ab75272 - tokio::runtime::context::blocking::BlockingRegionGuard::block_on::h59f870419b336364
  33:     0x560c4abf03d6 - tokio::runtime::scheduler::current_thread::CurrentThread::block_on::{{closure}}::h9ae3615210843369
  34:     0x560c4ab759a9 - tokio::runtime::context::runtime::enter_runtime::ha38466a295c8add2
  35:     0x560c4abefcc7 - tokio::runtime::scheduler::current_thread::CurrentThread::block_on::h04dfe1aa4bae3c68
  36:     0x560c4ab06d93 - tokio::runtime::runtime::Runtime::block_on_inner::h560997ccdfe0f9cb
  37:     0x560c4ab0702a - tokio::runtime::runtime::Runtime::block_on::h2bf8e3deeb4c9ff4
  38:     0x560c4abeafea - lemmy_apub::http::community::tests::test_outbox_deleted_user::haabbdeeb3bdd6bbf
  39:     0x560c4ab58ed8 - lemmy_apub::http::community::tests::test_outbox_deleted_user::{{closure}}::h15a184be2e4928c6
  40:     0x560c4ac95f66 - core::ops::function::FnOnce::call_once::hdf303cdfc2776dc8
  41:     0x560c4acd30cb - test::__rust_begin_short_backtrace::h204541656cf312c3
  42:     0x560c4ace8d25 - test::run_test::{{closure}}::h0804340aebb2c94e
  43:     0x560c4acbf524 - std::sys::backtrace::__rust_begin_short_backtrace::hd358f13a627bfe6c
  44:     0x560c4acc2e9a - core::ops::function::FnOnce::call_once{{vtable.shim}}::h7d9f867e7de29aae
  45:     0x560c4c2dffdf - std::sys::thread::unix::Thread::new::thread_start::h10345b7e8309cb92
  46:     0x7f4986277b7b - <unknown>
  47:     0x7f49862f55f0 - __clone
  48:                0x0 - <unknown>
error: test failed, to rerun pass `-p lemmy_apub --lib`

Caused by:
  process didn't exit successfully: `/woodpecker/src/github.com/LemmyNet/lemmy/target/debug/deps/lemmy_apub-0e393ec31e9fdbad` (signal: 6, SIGABRT: process abort signal)
     Running unittests src/lib.rs (target/debug/deps/lemmy_apub_activities-6e13b164f77d2ea0)

These might be responsible for later failures. Also try to use the same db pool and same init code for all tests.

By the way, you can run specific tests locally with ./scripts/test.sh lemmy_db_views_post or ./scripts/test.sh lemmy_db_views_post *test_name*

@momentary-lapse
Contributor Author

Yes, I did run these scripts locally beforehand (also, ./scripts/test.sh lemmy_db_views* runs tests from multiple crates matching the pattern), and there are two apub tests (test_markdown_rewrite_remote_links in lemmy_apub_objects and test_send_manager_processes from lemmy_apub_send) which are panicking. It seems unrelated to lemmy_db_views_post.

@momentary-lapse
Contributor Author

momentary-lapse commented Jan 2, 2026

Fixed the db views post tests; now only one of them fails:

thread 'test::post_listing_person_language' (169139) panicked at /home/aboitsov/.cargo/git/checkouts/db-pool-42b0ece7055f5868/68d84b8/src/async/db_pool.rs:189:30:
connection pool cleaning must succeed: Query(DatabaseError(Unknown, "migrations must be managed using lemmy_server instead of diesel CLI"))

Not sure why, or what the difference is with the others; will check later.

@Nutomic
Member

Nutomic commented Jan 5, 2026

The markdown_links test is failing because line 141 has a second call to init_test_context(). After removing that it passes.

Other test failures show a second error: can call blocking only when running on the multi-threaded runtime. Following this, changing the macro to #[tokio_shared_rt::test(shared = true, flavor = "multi_thread")] makes the test pass (at least for post_listing_no_person, which I tried).
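For context on why the flavor change helps: the backtrace above (frames 13–16) shows db-pool's ConnectionPool calling tokio::task::block_in_place from its Drop impl, and block_in_place panics unless it runs on a multi-thread tokio runtime, while #[tokio::test] defaults to the single-threaded current_thread flavor. A minimal sketch of the macro change (the test name is taken from this thread; the body is a placeholder, not Lemmy code):

```rust
// Before: the default test macro builds a current_thread runtime, so any
// block_in_place during db-pool's Drop-time cleanup panics.
// #[tokio::test]

// After: a multi-thread runtime shared across tests, so block_in_place
// in the pool's Drop impl is allowed.
#[tokio_shared_rt::test(shared = true, flavor = "multi_thread")]
async fn post_listing_no_person() {
    // ... test body unchanged ...
}
```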

By the way, are tests not executed in parallel yet? The runtime seems the same as before.

And please don't forget to remove docker-compose from the test scripts so that they use the local db like before.
