`.github/workflows/check-code-quality.yaml` (23 changes: 6 additions & 17 deletions)

```diff
@@ -11,28 +11,17 @@ on:
     - cron: '0 4 * * *'
 
 permissions:
-  actions: write # Needed for skip-duplicate-jobs job
   contents: read
 
-jobs:
-  # Special job which automatically cancels old runs for the same branch, prevents runs for the
-  # same file set which has already passed, etc.
-  pre_job:
-    name: Skip Duplicate Jobs Pre Job
-    runs-on: ubuntu-latest
-    outputs:
-      should_skip: ${{ steps.skip_check.outputs.should_skip }}
-    steps:
-      - id: skip_check
-        uses: fkirc/skip-duplicate-actions@12aca0a884f6137d619d6a8a09fcc3406ced5281 # v5.3.0
-        with:
-          cancel_others: 'true'
-          github_token: ${{ github.token }}
+# We don't want to cancel any redundant runs on main so we use run_id when head_ref is
+# not available
+concurrency:
+  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
+  cancel-in-progress: true
 
+jobs:
   build:
     runs-on: ubuntu-latest
-    needs: pre_job
-    if: ${{ needs.pre_job.outputs.should_skip != 'true' || github.ref_name == 'main' }}
 
     steps:
       - uses: actions/checkout@v4
```
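This same stanza recurs in every workflow below: `group` is keyed on `github.head_ref` when it is set (pull request runs) and falls back to `github.run_id` otherwise (pushes to main, scheduled runs). Because `run_id` is unique per run, runs on main never share a group and are never cancelled, while a new push to a PR branch cancels the still-running jobs for the previous push. In check-code-quality.yaml this built-in mechanism replaces the third-party skip-duplicate-actions pre-job, which also lets the `actions: write` permission be dropped. A minimal TypeScript sketch of how the `group` expression resolves (illustration only; the function and values below are hypothetical, not part of the PR):

```typescript
// Hypothetical illustration of the `group` expression in the workflows above.
// In GitHub Actions expressions, `||` returns the right operand when the left
// is falsy, and an unset `head_ref` is the empty string.
function concurrencyGroup(workflow: string, headRef: string, runId: number): string {
  return `${workflow}-${headRef || runId}`;
}

console.log(concurrencyGroup("Check Code Quality", "my-feature", 1001));
// "Check Code Quality-my-feature": re-pushes to the branch share this group,
// so the older in-progress run is cancelled.
console.log(concurrencyGroup("Check Code Quality", "", 1002));
// "Check Code Quality-1002": unique per run, so runs on main are never cancelled.
```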
`.github/workflows/codeql.yml` (9 changes: 9 additions & 0 deletions)

```diff
@@ -8,6 +8,15 @@ on:
   schedule:
     - cron: '15 9 * * 5'
 
+permissions:
+  contents: read
+
+# We don't want to cancel any redundant runs on main so we use run_id when head_ref is
+# not available
+concurrency:
+  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
+  cancel-in-progress: true
+
 jobs:
   analyze:
     name: Analyze
```
`.github/workflows/playwright.yml` (11 changes: 11 additions & 0 deletions)

```diff
@@ -1,9 +1,20 @@
 name: Playwright E2E Tests
 
 on:
+  push:
+    branches: [ main ]
   pull_request:
     branches: [ main ]
+
+permissions:
+  contents: read
+
+# We don't want to cancel any redundant runs on main so we use run_id when head_ref is
+# not available
+concurrency:
+  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
+  cancel-in-progress: true
 
 jobs:
   test:
     timeout-minutes: 30
```
`.github/workflows/unittests.yml` (6 changes: 6 additions & 0 deletions)

```diff
@@ -4,6 +4,12 @@ on: [pull_request]
 permissions:
   contents: read
 
+# We don't want to cancel any redundant runs on main so we use run_id when head_ref is
+# not available
+concurrency:
+  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
+  cancel-in-progress: true
+
 jobs:
   UnitTest:
     runs-on: ubuntu-latest
```
`.github/workflows/validate.yaml` (6 changes: 6 additions & 0 deletions)

```diff
@@ -4,6 +4,12 @@ on: [pull_request]
 permissions:
   contents: read
 
+# We don't want to cancel any redundant runs on main so we use run_id when head_ref is
+# not available
+concurrency:
+  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
+  cancel-in-progress: true
+
 jobs:
   validate:
     runs-on: ubuntu-latest
```
`README.md` (7 changes: 7 additions & 1 deletion)

```diff
@@ -1,4 +1,7 @@
 # Singularity Data Lake Add-On for Splunk
+
+[![Check Code Quality](https://github.com/scalyr/dataset-addon-for-splunk/actions/workflows/check-code-quality.yaml/badge.svg)](https://github.com/scalyr/dataset-addon-for-splunk/actions/workflows/check-code-quality.yaml) [![Unit tests](https://github.com/scalyr/dataset-addon-for-splunk/actions/workflows/unittests.yml/badge.svg)](https://github.com/scalyr/dataset-addon-for-splunk/actions/workflows/unittests.yml) [![UCC Gen Validation](https://github.com/scalyr/dataset-addon-for-splunk/actions/workflows/validate.yaml/badge.svg)](https://github.com/scalyr/dataset-addon-for-splunk/actions/workflows/validate.yaml) [![Playwright E2E Tests](https://github.com/scalyr/dataset-addon-for-splunk/actions/workflows/playwright.yml/badge.svg)](https://github.com/scalyr/dataset-addon-for-splunk/actions/workflows/playwright.yml)
+
 The Singularity Data Lake Add-On for Splunk provides integration with [Singularity Data Lake](https://www.sentinelone.com/platform/xdr-ingestion/) and [DataSet](https://www.dataset.com) by [SentinelOne](https://sentinelone.com). The key functions allow two-way integration:
 - SPL custom command to query directly from the Splunk UI.
 - Inputs to index alerts as CIM-compliant, or any user-defined query results.
@@ -195,6 +198,10 @@ If Splunk events all show the same time, ensure results are returning a `timestamp`
 This add-on was built with the [Splunk Add-on UCC framework](https://splunk.github.io/addonfactory-ucc-generator/) and uses the [Splunk Enterprise Python SDK](https://github.com/splunk/splunk-sdk-python).
 Splunk is a trademark or registered trademark of Splunk Inc. in the United States and other countries.
 
+## Development
+
+For information on development and contributing, please see [CONTRIBUTING.md](CONTRIBUTING.md).
+
 ## Security
 
 For information on how to report security vulnerabilities, please see [SECURITY.md](SECURITY.md).
```
`e2e/inputs.spec.ts` (9 changes: 8 additions & 1 deletion)

```diff
@@ -65,14 +65,16 @@ test('New Input - DataSet PowerQuery', async ({ page }) => {
 });
 
 test('New Input - DataSet Alerts', async ({ page }) => {
+  test.setTimeout(3 * 60 * 1000);
+
   await openDialog(page, "DataSet Alerts");
 
   console.log("Fill the form")
   const queryName = ('QuErY_A_' + (Math.random() * 1e18)).slice(0, 15)
   console.log("Create query: ", queryName);
 
   await page.locator('div').filter({ hasText: /^\*?NameEnter a unique name for the data input$/ }).locator('[data-test="textbox"]').fill(queryName);
-  await page.locator('div').filter({ hasText: /^\*?IntervalTime interval of input in seconds\.$/ }).locator('[data-test="textbox"]').fill("60")
+  await page.locator('div').filter({ hasText: /^\*?IntervalTime interval of input in seconds\.$/ }).locator('[data-test="textbox"]').fill("20")
   await page.locator('form div').filter({ hasText: /^\*?Start TimeRelative time to query back. Use short form relative time, e.g.: 24h/ }).locator('[data-test="textbox"]').fill("60m")
 
   await page.getByLabel("Select a value").click();
@@ -84,6 +86,11 @@ test('New Input - DataSet Alerts', async ({ page }) => {
 
   await checkRowExists(page, queryName);
 
+  // Alerts are evaluated + alert state is reported every 1 minute so we need to wait at least 2 minutes to avoid
+  // flaky tests. 3 minutes would be even better
+  console.log("Waiting ~2 minutes since alerts are evaluated and state is reported only every 1 minute")
+  await page.waitForTimeout(2 * 65 * 1000);
+
   await goToSplunkSearch(page);
 
   await searchSplunk(page, `source="dataset_alerts://${queryName}"`)
```
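Two timing changes work together here: the test timeout is raised to 3 minutes (`test.setTimeout(3 * 60 * 1000)`) to leave room for the new fixed wait of `2 * 65 * 1000` ms (130 seconds, slightly over two one-minute alert-evaluation cycles). A fixed `waitForTimeout` keeps the test simple but always pays the full wait; a retry loop that re-checks until the events arrive is a common alternative. A minimal sketch under that assumption (the helper below is hypothetical, not part of this PR):

```typescript
// Hypothetical helper: retry an async assertion until it passes or the
// deadline expires, instead of sleeping for a fixed 130 seconds.
async function eventually<T>(
  check: () => Promise<T>,
  timeoutMs = 3 * 60 * 1000,
  intervalMs = 15 * 1000,
): Promise<T> {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    try {
      return await check();
    } catch (err) {
      // Out of time for another attempt: surface the last failure.
      if (Date.now() + intervalMs > deadline) throw err;
      await new Promise((resolve) => setTimeout(resolve, intervalMs));
    }
  }
}

// Usage sketch, assuming searchSplunk/checkRowExists (the helpers the test
// already uses) fail while no results have arrived yet:
//   await eventually(async () => {
//     await searchSplunk(page, `source="dataset_alerts://${queryName}"`);
//     await checkRowExists(page, queryName);
//   });
```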