Fix LP tests filters check and add to smoke tests in CI #11649

Merged: 39 commits, Jan 23, 2024
Changes from 5 commits
Commits (39)
8669b49
fix hasFilters, fix cancelling log emission in case of errors, add so…
Tofel Dec 21, 2023
5cd31c7
fix replay test
Tofel Dec 21, 2023
2bd609c
fix lints
Tofel Dec 21, 2023
3c7ec93
shorten chaos log poller test
Tofel Dec 21, 2023
4f029dc
remove debug logs
Tofel Dec 21, 2023
74248fe
disable backup poller. DO NOT MERGE THIS
Tofel Dec 22, 2023
bc8afe6
make it configurable which container is paused: cl node or postgres
Tofel Dec 22, 2023
2344b9a
Merge branch 'develop' into lp_tests_final_fix_and_ci
Tofel Jan 8, 2024
b7ed75c
try Domino's way of stopping log poller
Tofel Jan 8, 2024
e509bbe
fix smoke test job name
Tofel Jan 8, 2024
9f09b95
remove unnecessary check
Tofel Jan 8, 2024
428505f
Merge branch 'develop' into lp_tests_final_fix_and_ci
Tofel Jan 9, 2024
eefb8ba
Merge branch 'develop' into lp_tests_final_fix_and_ci
Tofel Jan 10, 2024
bcb5b39
Merge branch 'lp_tests_final_fix_and_ci' of github.com:smartcontractk…
Tofel Jan 10, 2024
1d6355e
try executing postgress pausing lp test, more debug info about missin…
Tofel Jan 10, 2024
095b18b
Merge branch 'develop' into lp_tests_final_fix_and_ci
Tofel Jan 10, 2024
1c0469f
fix lints
Tofel Jan 10, 2024
3986381
Merge branch 'develop' into lp_tests_final_fix_and_ci
Tofel Jan 10, 2024
84c2268
Merge branch 'develop' into lp_tests_final_fix_and_ci
Tofel Jan 11, 2024
c9a4f11
revert disabling of backpolling, simplify some code, less logs
Tofel Jan 11, 2024
5017d47
make optional parameters not required in lp on demand test
Tofel Jan 11, 2024
c6561fe
run lp smoke tests in a separate job
Tofel Jan 12, 2024
c39a46a
prepare log poller json test list
Tofel Jan 12, 2024
28201c6
add missing comma, revert some comments
Tofel Jan 15, 2024
470be1d
Merge branch 'develop' into lp_tests_final_fix_and_ci
Tofel Jan 15, 2024
a79a60d
fix filename for lp matrix job
Tofel Jan 15, 2024
fd53328
Merge branch 'develop' into lp_tests_final_fix_and_ci
Tofel Jan 19, 2024
d8e50c4
Merge branch 'develop' into lp_tests_final_fix_and_ci
Tofel Jan 22, 2024
bbcd6b6
Merge branch 'develop' into lp_tests_final_fix_and_ci
Tofel Jan 22, 2024
cf37f31
Merge branch 'develop' into lp_tests_final_fix_and_ci
Tofel Jan 22, 2024
623033a
move log poller scenarios to test file, remove unused load test
Tofel Jan 22, 2024
2b41ffc
Merge branch 'develop' into lp_tests_final_fix_and_ci
Tofel Jan 23, 2024
af7db75
reduce code duplication in lp tests
Tofel Jan 23, 2024
7de3fd7
fix lp tests, fix smoke workflow
Tofel Jan 23, 2024
34243da
Merge branch 'develop' into lp_tests_final_fix_and_ci
Tofel Jan 23, 2024
62ee2d1
Merge branch 'develop' into lp_tests_final_fix_and_ci
Tofel Jan 23, 2024
451d457
remove commented out sections, remove cosmossdk.io/errors
Tofel Jan 23, 2024
f52c011
Merge branch 'lp_tests_final_fix_and_ci' of github.com:smartcontractk…
Tofel Jan 23, 2024
157e683
removed trailing spaces, added comments to functions
Tofel Jan 23, 2024
20 changes: 19 additions & 1 deletion .github/workflows/integration-tests.yml
@@ -328,7 +328,7 @@ jobs:
run: -run TestOCRJobReplacement
file: ocr
pyroscope_env: ci-smoke-ocr-evm-simulated
- name: ocr2
- name: ocr2-replacement
Collaborator:

Won't this break the run? I thought names were used to help determine files?

Contributor Author:

Well, smoke tests pass, no? ;-)

nodes: 1
os: ubuntu-latest
run: -run TestOCRv2JobReplacement
@@ -369,6 +369,24 @@ jobs:
nodes: 1
os: ubuntu-latest
pyroscope_env: ci-smoke-forwarder-ocr-evm-simulated
- name: log-poller-finality-tag
Contributor:

Two questions:

  1. Is it possible to run these tests in parallel?
  2. If the answer is no, do we think we will need to add more log poller tests? We have a setup that uses JSON files to handle test files with multiple Docker-based tests, which helps us notice when a new test is added but not wired into CI. With only 3 tests it might be overkill to move to that, but if we are going to add any more we might want to go that route.

Contributor Author:

  1. In parallel as in a separate job that runs alongside this one? I think it should be possible, but unless we decide we want to run all 6 smoke tests rather than just 3 of them, IMHO it's not needed.
  2. @reductionista do you think it makes sense to run the 3 tests we have (regular, replay, chaos) for both the fixed-depth and finality-tag versions, or is it enough to run some of these 3 tests with fixed depth and the rest with the finality tag?

Contributor:

I don't see much point in running any of these tests in CI yet until we add a way to disable the backup log poller in config. I guess we can run them, but we may get a lot of false positives.

Contributor:

I think once we are ready to run all of them, we should probably run all 6, because there could be different bugs with fixed depth and finality tag for any of those situations.

Contributor Author:

In the end I did what you suggested, @tateexon.
[image]

@reductionista I think it's better to run them even if they are not testing LP in isolation; even though the backup poller is covering whatever issues we might have, we might still catch something.

nodes: 1
os: ubuntu-latest
run: -run TestLogPollerFewFiltersFinalityTag
file: log_poller
pyroscope_env: ""
- name: log-poller-chaos-fixed-depth
nodes: 1
os: ubuntu-latest
run: -run TestLogPollerWithChaosFixedDepth
file: log_poller
pyroscope_env: ""
- name: log-poller-replay-finality-tag
nodes: 1
os: ubuntu-latest
run: -run TestLogPollerReplayFinalityTag
file: log_poller
pyroscope_env: ""
runs-on: ${{ matrix.product.os }}
name: ETH Smoke Tests ${{ matrix.product.name }}${{ matrix.product.tag_suffix }}
steps:
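
The commit "prepare log poller json test list" and the conversation above mention driving the log poller smoke tests from a JSON test list in a separate matrix job. The actual file name and schema are not visible in this diff; the sketch below is only an assumed shape, mirroring the three matrix entries added above.

[
  { "name": "log-poller-finality-tag", "file": "log_poller", "run": "-run TestLogPollerFewFiltersFinalityTag" },
  { "name": "log-poller-chaos-fixed-depth", "file": "log_poller", "run": "-run TestLogPollerWithChaosFixedDepth" },
  { "name": "log-poller-replay-finality-tag", "file": "log_poller", "run": "-run TestLogPollerReplayFinalityTag" }
]

A list like this can feed a GitHub Actions matrix via fromJSON, which is presumably what the "fix filename for lp matrix job" commit refers to.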
22 changes: 11 additions & 11 deletions integration-tests/smoke/log_poller_test.go
@@ -70,7 +70,7 @@ func TestLogPollerFewFiltersFinalityTag(t *testing.T) {

// consistency test with no network disruptions with approximate emission of 1000-1100 logs per second for ~110-120 seconds
// 900 filters are registered
func TestLogManyFiltersPollerFixedDepth(t *testing.T) {
func TestLogPollerManyFiltersFixedDepth(t *testing.T) {
cfg := logpoller.Config{
General: &logpoller.General{
Generator: logpoller.GeneratorType_Looped,
@@ -99,7 +99,7 @@ func TestLogManyFiltersPollerFixedDepth(t *testing.T) {
logpoller.ExecuteBasicLogPollerTest(t, &cfg)
}

func TestLogManyFiltersPollerFinalityTag(t *testing.T) {
func TestLogPollerManyFiltersFinalityTag(t *testing.T) {
cfg := logpoller.Config{
General: &logpoller.General{
Generator: logpoller.GeneratorType_Looped,
@@ -141,15 +141,15 @@ func TestLogPollerWithChaosFixedDepth(t *testing.T) {
},
LoopedConfig: &logpoller.LoopedConfig{
ContractConfig: logpoller.ContractConfig{
ExecutionCount: 100,
ExecutionCount: 50,
},
FuzzConfig: logpoller.FuzzConfig{
MinEmitWaitTimeMs: 200,
MaxEmitWaitTimeMs: 500,
MinEmitWaitTimeMs: 100,
MaxEmitWaitTimeMs: 300,
},
},
ChaosConfig: &logpoller.ChaosConfig{
ExperimentCount: 10,
ExperimentCount: 5,
},
}

@@ -173,15 +173,15 @@ func TestLogPollerWithChaosFinalityTag(t *testing.T) {
},
LoopedConfig: &logpoller.LoopedConfig{
ContractConfig: logpoller.ContractConfig{
ExecutionCount: 100,
ExecutionCount: 50,
},
FuzzConfig: logpoller.FuzzConfig{
MinEmitWaitTimeMs: 200,
MaxEmitWaitTimeMs: 500,
MinEmitWaitTimeMs: 100,
MaxEmitWaitTimeMs: 300,
},
},
ChaosConfig: &logpoller.ChaosConfig{
ExperimentCount: 10,
ExperimentCount: 5,
},
}

@@ -236,7 +236,7 @@ func TestLogPollerReplayFinalityTag(t *testing.T) {
Generator: logpoller.GeneratorType_Looped,
Contracts: 2,
EventsPerTx: 4,
UseFinalityTag: false,
UseFinalityTag: true,
},
LoopedConfig: &logpoller.LoopedConfig{
ContractConfig: logpoller.ContractConfig{
183 changes: 138 additions & 45 deletions integration-tests/universal/log_poller/helpers.go
@@ -12,11 +12,13 @@ import (
"testing"
"time"

"cosmossdk.io/errors"
Collaborator:

Did you mean to use the standard errors instead?

Contributor Author:

Already removed :-)

geth "github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum/accounts/abi"
"github.com/ethereum/go-ethereum/common"
geth_types "github.com/ethereum/go-ethereum/core/types"
"github.com/jmoiron/sqlx"
"github.com/onsi/gomega"
"github.com/rs/zerolog"
"github.com/scylladb/go-reflectx"
"github.com/stretchr/testify/require"
@@ -28,6 +30,7 @@ import (
"github.com/smartcontractkit/chainlink-testing-framework/logging"
"github.com/smartcontractkit/chainlink-testing-framework/networks"
"github.com/smartcontractkit/chainlink-testing-framework/utils/ptr"
"github.com/smartcontractkit/chainlink-testing-framework/utils/testcontext"
evmcfg "github.com/smartcontractkit/chainlink/v2/core/chains/evm/config/toml"
"github.com/smartcontractkit/chainlink/v2/core/chains/evm/logpoller"
cltypes "github.com/smartcontractkit/chainlink/v2/core/chains/evm/types"
@@ -227,14 +230,14 @@ func getStringSlice(length int) []string {
var emitEvents = func(ctx context.Context, l zerolog.Logger, logEmitter *contracts.LogEmitter, cfg *Config, wg *sync.WaitGroup, results chan LogEmitterChannel) {
address := (*logEmitter).Address().String()
localCounter := 0
select {
case <-ctx.Done():
l.Warn().Str("Emitter address", address).Msg("Context cancelled, not emitting events")
return
default:
defer wg.Done()
for i := 0; i < cfg.LoopedConfig.ExecutionCount; i++ {
for _, event := range cfg.General.EventsToEmit {
defer wg.Done()
for i := 0; i < cfg.LoopedConfig.ExecutionCount; i++ {
for _, event := range cfg.General.EventsToEmit {
select {
case <-ctx.Done():
l.Warn().Str("Emitter address", address).Msg("Context cancelled, not emitting events")
return
default:
l.Debug().Str("Emitter address", address).Str("Event type", event.Name).Str("index", fmt.Sprintf("%d/%d", (i+1), cfg.LoopedConfig.ExecutionCount)).Msg("Emitting log from emitter")
var err error
switch event.Name {
Expand All @@ -244,14 +247,15 @@ var emitEvents = func(ctx context.Context, l zerolog.Logger, logEmitter *contrac
_, err = (*logEmitter).EmitLogIntsIndexed(getIntSlice(cfg.General.EventsPerTx))
case "Log3":
_, err = (*logEmitter).EmitLogStrings(getStringSlice(cfg.General.EventsPerTx))
case "Log4":
_, err = (*logEmitter).EmitLogIntMultiIndexed(1, 1, cfg.General.EventsPerTx)
default:
err = fmt.Errorf("unknown event name: %s", event.Name)
}

if err != nil {
results <- LogEmitterChannel{
logsEmitted: 0,
err: err,
err: err,
}
return
}
Expand All @@ -264,35 +268,24 @@ var emitEvents = func(ctx context.Context, l zerolog.Logger, logEmitter *contrac
l.Info().Str("Emitter address", address).Str("Index", fmt.Sprintf("%d/%d", i+1, cfg.LoopedConfig.ExecutionCount)).Msg("Emitted all three events")
}
}

l.Info().Str("Emitter address", address).Int("Total logs emitted", localCounter).Msg("Finished emitting events")

results <- LogEmitterChannel{
logsEmitted: localCounter,
err: nil,
}
}
}

var chainHasFinalisedEndBlock = func(l zerolog.Logger, evmClient blockchain.EVMClient, endBlock int64) (bool, error) {
effectiveEndBlock := endBlock + 1
lastFinalisedBlockHeader, err := evmClient.GetLatestFinalizedBlockHeader(context.Background())
if err != nil {
return false, err
}

l.Info().Int64("Last finalised block header", lastFinalisedBlockHeader.Number.Int64()).Int64("End block", effectiveEndBlock).Int64("Blocks left till end block", effectiveEndBlock-lastFinalisedBlockHeader.Number.Int64()).Msg("Waiting for the finalized block to move beyond end block")
l.Info().Str("Emitter address", address).Int("Total logs emitted", localCounter).Msg("Finished emitting events")

return lastFinalisedBlockHeader.Number.Int64() > effectiveEndBlock, nil
results <- LogEmitterChannel{
logsEmitted: localCounter,
err: nil,
}
}

var logPollerHasFinalisedEndBlock = func(endBlock int64, chainID *big.Int, l zerolog.Logger, coreLogger core_logger.SugaredLogger, nodes *test_env.ClCluster) (bool, error) {
wg := &sync.WaitGroup{}

type boolQueryResult struct {
nodeName string
hasFinalised bool
err error
nodeName string
hasFinalised bool
finalizedBlock int64
err error
}

endBlockCh := make(chan boolQueryResult, len(nodes.Nodes)-1)
@@ -328,9 +321,10 @@ var logPollerHasFinalisedEndBlock = func(endBlock int64, chainID *big.Int, l zer
}

r <- boolQueryResult{
nodeName: clNode.ContainerName,
hasFinalised: latestBlock.FinalizedBlockNumber > endBlock,
err: nil,
nodeName: clNode.ContainerName,
finalizedBlock: latestBlock.FinalizedBlockNumber,
hasFinalised: latestBlock.FinalizedBlockNumber > endBlock,
err: nil,
}

}
@@ -353,7 +347,7 @@ var logPollerHasFinalisedEndBlock = func(endBlock int64, chainID *big.Int, l zer
if r.hasFinalised {
l.Info().Str("Node name", r.nodeName).Msg("CL node has finalised end block")
} else {
l.Warn().Str("Node name", r.nodeName).Msg("CL node has not finalised end block yet")
l.Warn().Int64("Has", r.finalizedBlock).Int64("Want", endBlock).Str("Node name", r.nodeName).Msg("CL node has not finalised end block yet")
}

if len(foundMap) == len(nodes.Nodes)-1 {
@@ -798,27 +792,32 @@ func runLoopedGenerator(t *testing.T, cfg *Config, logEmitters []*contracts.LogE
aggrChan := make(chan int, len(logEmitters))

go func() {
for emitter := range emitterCh {
if emitter.err != nil {
emitErr = emitter.err
cancelFn()
for {
select {
case <-ctx.Done():
return
case emitter := <-emitterCh:
if emitter.err != nil {
emitErr = emitter.err
cancelFn()
return
}
aggrChan <- emitter.logsEmitted
}
aggrChan <- emitter.logsEmitted
}
}()

wg.Wait()
close(emitterCh)

for i := 0; i < len(logEmitters); i++ {
total += <-aggrChan
}

if emitErr != nil {
return 0, emitErr
}

for i := 0; i < len(logEmitters); i++ {
total += <-aggrChan
}

return int(total), nil
}

@@ -1043,9 +1042,9 @@ func setupLogPollerTestDocker(
WithConsensusType(ctf_test_env.ConsensusType_PoS).
WithConsensusLayer(ctf_test_env.ConsensusLayer_Prysm).
WithExecutionLayer(ctf_test_env.ExecutionLayer_Geth).
WithWaitingForFinalization().
// WithWaitingForFinalization().
WithEthereumChainConfig(ctf_test_env.EthereumChainConfig{
SecondsPerSlot: 8,
SecondsPerSlot: 4,
SlotsPerEpoch: 2,
}).
Build()
@@ -1118,3 +1117,97 @@

return env.EVMClient, nodeClients, env.ContractDeployer, linkToken, registry, registrar, env
}

func uploadLogEmitterContractsAndWaitForFinalisation(l zerolog.Logger, t *testing.T, testEnv *test_env.CLClusterTestEnv, cfg *Config) []*contracts.LogEmitter {
logEmitters := make([]*contracts.LogEmitter, 0)
for i := 0; i < cfg.General.Contracts; i++ {
logEmitter, err := testEnv.ContractDeployer.DeployLogEmitterContract()
logEmitters = append(logEmitters, &logEmitter)
require.NoError(t, err, "Error deploying log emitter contract")
l.Info().Str("Contract address", logEmitter.Address().Hex()).Msg("Log emitter contract deployed")
time.Sleep(200 * time.Millisecond)
}
afterUploadBlock, err := testEnv.EVMClient.LatestBlockNumber(testcontext.Get(t))
require.NoError(t, err, "Error getting latest block number")

gom := gomega.NewGomegaWithT(t)
gom.Eventually(func(g gomega.Gomega) {
targetBlockNumber := int64(afterUploadBlock + 1)
finalized, err := testEnv.EVMClient.GetLatestFinalizedBlockHeader(testcontext.Get(t))
if err != nil {
l.Warn().Err(err).Msg("Error checking if contract were uploaded. Retrying...")
return
}
finalizedBlockNumber := finalized.Number.Int64()

if finalizedBlockNumber < targetBlockNumber {
l.Debug().Int64("Finalized block", finalized.Number.Int64()).Int64("After upload block", int64(afterUploadBlock+1)).Msg("Waiting for contract upload to finalise")
}

g.Expect(finalizedBlockNumber >= targetBlockNumber).To(gomega.BeTrue(), "Contract upload did not finalize in time")
}, "2m", "10s").Should(gomega.Succeed())

return logEmitters
}

func assertUpkeepIdsUniqueness(upkeepIDs []*big.Int) error {
upKeepIdSeen := make(map[int64]bool)
for _, upkeepID := range upkeepIDs {
if _, ok := upKeepIdSeen[upkeepID.Int64()]; ok {
return fmt.Errorf("Duplicate upkeep ID %d", upkeepID.Int64())
}
upKeepIdSeen[upkeepID.Int64()] = true
}

return nil
}

func assertContractAddressUniquneness(logEmitters []*contracts.LogEmitter) error {
contractAddressSeen := make(map[string]bool)
for _, logEmitter := range logEmitters {
address := (*logEmitter).Address().String()
if _, ok := contractAddressSeen[address]; ok {
return fmt.Errorf("Duplicate contract address %s", address)
}
contractAddressSeen[address] = true
}

return nil
}

func registerFiltersAndAssertUniquness(l zerolog.Logger, registry contracts.KeeperRegistry, upkeepIDs []*big.Int, logEmitters []*contracts.LogEmitter, cfg *Config, upKeepsNeeded int) error {
uniqueFilters := make(map[string]bool)

upkeepIdIndex := 0
for i := 0; i < len(logEmitters); i++ {
for j := 0; j < len(cfg.General.EventsToEmit); j++ {
emitterAddress := (*logEmitters[i]).Address()
topicId := cfg.General.EventsToEmit[j].ID

upkeepID := upkeepIDs[upkeepIdIndex]
l.Debug().Int("Upkeep id", int(upkeepID.Int64())).Str("Emitter address", emitterAddress.String()).Str("Topic", topicId.Hex()).Msg("Registering log trigger for log emitter")
err := registerSingleTopicFilter(registry, upkeepID, emitterAddress, topicId)
randomWait(150, 300)
if err != nil {
return errors.Wrapf(err, "Error registering log trigger for log emitter %s", emitterAddress.String())
}

if i%10 == 0 {
l.Info().Msgf("Registered log trigger for topic %d for log emitter %d/%d", j, i, len(logEmitters))
}

key := fmt.Sprintf("%s-%s", emitterAddress.String(), topicId.Hex())
if _, ok := uniqueFilters[key]; ok {
return fmt.Errorf("Duplicate filter %s", key)
}
uniqueFilters[key] = true
upkeepIdIndex++
}
}

if upKeepsNeeded != len(uniqueFilters) {
return fmt.Errorf("Number of unique filters should be equal to number of upkeeps. Expected %d. Got %d", upKeepsNeeded, len(uniqueFilters))
}

return nil
}
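
One of the fixes in this file (per the commit "fix hasFilters, fix cancelling log emission in case of errors, add so…") is moving the ctx.Done() check inside the emission loop in emitEvents, and switching runLoopedGenerator's reader goroutine to a for/select loop, so an error from one emitter cancels the others promptly instead of the cancellation only being checked once up front. Below is a minimal, self-contained sketch of that cancellation pattern; the emitAll and doWork names are illustrative and not part of the test code.

package main

import (
	"context"
	"errors"
	"fmt"
)

// doWork stands in for a single log emission; it fails on the third iteration
// so the cancellation path is exercised.
func doWork(i int) error {
	if i == 2 {
		return errors.New("emission failed")
	}
	return nil
}

// emitAll checks for cancellation before every iteration, mirroring the fix
// above where the select on ctx.Done() was moved inside the emission loop.
func emitAll(ctx context.Context, count int, results chan<- error) {
	for i := 0; i < count; i++ {
		select {
		case <-ctx.Done():
			fmt.Println("context cancelled, stopping emission early")
			return
		default:
			if err := doWork(i); err != nil {
				results <- err
				return
			}
		}
	}
	results <- nil
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	results := make(chan error, 1)
	go emitAll(ctx, 10, results)

	if err := <-results; err != nil {
		// Cancel the shared context so any other emitters stop as well.
		cancel()
		fmt.Println("got error:", err)
	}
}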