The Cribl TypeScript SDK for the control plane provides operational control over Cribl resources and helps streamline the process of integrating with Cribl.
In addition to the usage examples in this repository, you can adapt the code examples for common use cases in the Cribl documentation to use TypeScript instead of Python.
Complementary API reference documentation is available at https://docs.cribl.io/cribl-as-code/api-reference. Product documentation is available at https://docs.cribl.io.
Important
**Preview Feature**: The Cribl SDKs are Preview features that are still being developed. We do not recommend using them in a production environment, because the features might not be fully tested or optimized for performance, and related documentation could be incomplete.
Please continue to submit feedback through normal Cribl support channels, but assistance might be limited while the features remain in Preview.
The SDK can be installed with the npm, pnpm, bun, or yarn package manager.
```shell
# Use whichever package manager your project uses:
npm add cribl-control-plane
# or: pnpm add cribl-control-plane
# or: bun add cribl-control-plane
# or: yarn add cribl-control-plane
```

Note
This package is published with CommonJS and ES Modules (ESM) support.
For supported JavaScript runtimes, please consult RUNTIMES.md.
```typescript
import { CriblControlPlane } from "cribl-control-plane";

const criblControlPlane = new CriblControlPlane({
  serverURL: "https://api.example.com",
  security: {
    bearerAuth: process.env["CRIBLCONTROLPLANE_BEARER_AUTH"] ?? "",
  },
});

async function run() {
  const result = await criblControlPlane.lakeDatasets.create({
    lakeId: "<id>",
    criblLakeDataset: {
      acceleratedFields: [
        "<value 1>",
        "<value 2>",
      ],
      bucketName: "<value>",
      cacheConnection: {
        acceleratedFields: [
          "<value 1>",
          "<value 2>",
        ],
        backfillStatus: "pending",
        cacheRef: "<value>",
        createdAt: 7795.06,
        lakehouseConnectionType: "cache",
        migrationQueryId: "<id>",
        retentionInDays: 1466.58,
      },
      deletionStartedAt: 8310.58,
      description:
        "pleased toothbrush long brush smooth swiftly rightfully phooey chapel",
      format: "ddss",
      httpDAUsed: true,
      id: "<id>",
      metrics: {
        currentSizeBytes: 6170.04,
        metricsDate: "<value>",
      },
      retentionPeriodInDays: 456.37,
      searchConfig: {
        datatypes: [
          "<value 1>",
        ],
        metadata: {
          earliest: "<value>",
          enableAcceleration: true,
          fieldList: [
            "<value 1>",
            "<value 2>",
          ],
          latestRunInfo: {
            earliestScannedTime: 4334.7,
            finishedAt: 6811.22,
            latestScannedTime: 5303.3,
            objectCount: 9489.04,
          },
          scanMode: "detailed",
        },
      },
      storageLocationId: "<id>",
      viewName: "<value>",
    },
  });

  console.log(result);
}

run();
```

Except for the `health.get` and `auth.tokens.get` methods, all Cribl SDK requests require you to authenticate with a Bearer token. You must include a valid Bearer token in the configuration when initializing your SDK client. The Bearer token verifies your identity and ensures secure access to the requested resources. The SDK automatically manages the Authorization header for subsequent requests once properly authenticated.
For information about Bearer token expiration, see Token Management in the Cribl as Code documentation.
Authentication happens once during SDK initialization. After you initialize the SDK client with authentication as shown in the authentication examples, the SDK automatically handles authentication for all subsequent API calls. You do not need to include authentication parameters in individual API requests. The SDK Example Usage section shows how to initialize the SDK and make API calls, but if you've properly initialized your client as shown in the authentication examples, you only need to make the API method calls themselves without re-initializing.
This SDK supports the following security schemes globally:
| Name | Type | Scheme | Environment Variable |
|---|---|---|---|
| `bearerAuth` | http | HTTP Bearer | `CRIBLCONTROLPLANE_BEARER_AUTH` |
| `clientOauth` | oauth2 | OAuth2 token | `CRIBLCONTROLPLANE_CLIENT_OAUTH` |
To configure authentication on Cribl.Cloud and in hybrid deployments, use the clientOauth security scheme. The SDK uses the OAuth credentials that you provide to obtain a Bearer token and refresh the token within its expiration window using the standard OAuth2 flow.
In on-prem deployments, use the bearerAuth security scheme. The SDK uses the username/password credentials that you provide to obtain a Bearer token. Automatically refreshing the Bearer token within its expiration window requires a callback function as shown in the On-Prem Authentication Example.
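In on-prem deployments, the SDK refreshes the Bearer token only if your callback decides when the current token is stale and fetches a new one. As a rough, SDK-agnostic sketch of that decision logic (all names below are hypothetical and not part of the cribl-control-plane API — see the On-Prem Authentication Example for the real callback shape):

```typescript
// Hypothetical helper, not part of the SDK: decide whether a Bearer token
// should be refreshed, using a safety margin so the refresh happens
// *within* the expiration window rather than after the token expires.
function shouldRefresh(
  expiresAtMs: number, // token expiry as a Unix timestamp in ms
  nowMs: number, // current time in ms
  marginMs: number = 60_000, // refresh this long before actual expiry
): boolean {
  return nowMs >= expiresAtMs - marginMs;
}

// A refresh callback could then cache the token and log in again only
// when needed. `fetchNewToken` stands in for a call to auth.tokens.get.
async function getToken(
  cached: { token: string; expiresAtMs: number } | null,
  fetchNewToken: () => Promise<{ token: string; expiresAtMs: number }>,
): Promise<{ token: string; expiresAtMs: number }> {
  if (cached && !shouldRefresh(cached.expiresAtMs, Date.now())) {
    return cached; // still fresh: reuse the cached token
  }
  return fetchNewToken(); // stale or missing: obtain a new token
}
```
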
Set the security scheme through the security optional parameter when initializing the SDK client instance. The SDK uses the selected scheme by default to authenticate with the API for all operations that support it.
The Cribl.Cloud and Hybrid Authentication Example demonstrates how to configure authentication on Cribl.Cloud and in hybrid deployments. To obtain the Client ID and Client Secret that you'll need to initialize the SDK with the clientOauth security scheme, follow the instructions for creating an API Credential in the Cribl as Code documentation.
The On-Prem Authentication Example demonstrates how to configure authentication in on-prem deployments using your username and password.
Available methods

`auth.tokens`

- get - Log in and fetch an authentication token

`destinations`

- list - List all Destinations
- create - Create a Destination
- get - Get a Destination
- update - Update a Destination
- delete - Delete a Destination

`destinations.pq`

- clear - Clear the persistent queue for a Destination
- get - Get information about the latest job to clear the persistent queue for a Destination

`groups`

- list - List all Worker Groups or Edge Fleets for the specified Cribl product
- create - Create a Worker Group or Edge Fleet for the specified Cribl product
- get - Get a Worker Group or Edge Fleet
- update - Update a Worker Group or Edge Fleet
- delete - Delete a Worker Group or Edge Fleet
- deploy - Deploy commits to a Worker Group or Edge Fleet

`groups.acl`

- get - Get the Access Control List for a Worker Group or Edge Fleet

`groups.acl.teams`

- get - Get the Access Control List for teams with permissions on a Worker Group or Edge Fleet for the specified Cribl product

`groups.configs.versions`

- get - Get the configuration version for a Worker Group or Edge Fleet

`health`

- get - Retrieve health status of the server

`lakeDatasets`

- create - Create a Lake Dataset
- list - List all Lake Datasets
- delete - Delete a Lake Dataset
- get - Get a Lake Dataset
- update - Update a Lake Dataset

`nodes.summaries`

- get - Get a summary of the Distributed deployment

`packs`

- install - Install a Pack
- list - List all Packs
- upload - Upload a Pack file
- delete - Uninstall a Pack
- get - Get a Pack
- update - Upgrade a Pack

`pipelines`

- list - List all Pipelines
- create - Create a Pipeline
- get - Get a Pipeline
- update - Update a Pipeline
- delete - Delete a Pipeline

`routes`

- list - List all Routes
- get - Get a Routing table
- update - Update a Route
- append - Add a Route to the end of the Routing table

`sources`

- list - List all Sources
- create - Create a Source
- get - Get a Source
- update - Update a Source
- delete - Delete a Source

`sources.hecTokens`

- create - Add an HEC token and optional metadata to a Splunk HEC Source
- update - Update metadata for an HEC token for a Splunk HEC Source

`versions.branches`

- list - List all branches in the Git repository used for Cribl configuration
- get - Get the name of the Git branch that the Cribl configuration is checked out to

`versions.commits`

- create - Create a new commit for pending changes to the Cribl configuration
- diff - Get the diff for a commit
- list - List the commit history
- push - Push local commits to the remote repository
- revert - Revert a commit in the local repository
- get - Get the diff and log message for a commit
- undo - Discard uncommitted (staged) changes

`versions.commits.files`

- count - Get a count of files that changed since a commit
- list - Get the names and statuses of files that changed since a commit

`versions.configs`

- get - Get the configuration and status for the Git integration

`versions.statuses`

- get - Get the status of the current working tree
All of the methods listed above are available as standalone functions. These functions are ideal for use in applications running in the browser, serverless runtimes, or other environments where application bundle size is a primary concern. When using a bundler to build your application, any unused functionality is either excluded from the final bundle or tree-shaken away.
To read more about standalone functions, check FUNCTIONS.md.
Available standalone functions
- `authTokensGet` - Log in and fetch an authentication token
- `destinationsCreate` - Create a Destination
- `destinationsDelete` - Delete a Destination
- `destinationsGet` - Get a Destination
- `destinationsList` - List all Destinations
- `destinationsPqClear` - Clear the persistent queue for a Destination
- `destinationsPqGet` - Get information about the latest job to clear the persistent queue for a Destination
- `destinationsSamplesCreate` - Send sample event data to a Destination
- `destinationsSamplesGet` - Get sample event data for a Destination
- `destinationsUpdate` - Update a Destination
- `groupsAclGet` - Get the Access Control List for a Worker Group or Edge Fleet
- `groupsAclTeamsGet` - Get the Access Control List for teams with permissions on a Worker Group or Edge Fleet for the specified Cribl product
- `groupsConfigsVersionsGet` - Get the configuration version for a Worker Group or Edge Fleet
- `groupsCreate` - Create a Worker Group or Edge Fleet for the specified Cribl product
- `groupsDelete` - Delete a Worker Group or Edge Fleet
- `groupsDeploy` - Deploy commits to a Worker Group or Edge Fleet
- `groupsGet` - Get a Worker Group or Edge Fleet
- `groupsList` - List all Worker Groups or Edge Fleets for the specified Cribl product
- `groupsUpdate` - Update a Worker Group or Edge Fleet
- `healthGet` - Retrieve health status of the server
- `lakeDatasetsCreate` - Create a Lake Dataset
- `lakeDatasetsDelete` - Delete a Lake Dataset
- `lakeDatasetsGet` - Get a Lake Dataset
- `lakeDatasetsList` - List all Lake Datasets
- `lakeDatasetsUpdate` - Update a Lake Dataset
- `nodesCount` - Get a count of Worker and Edge Nodes
- `nodesList` - Get detailed metadata for Worker and Edge Nodes
- `nodesSummariesGet` - Get a summary of the Distributed deployment
- `packsDelete` - Uninstall a Pack
- `packsGet` - Get a Pack
- `packsInstall` - Install a Pack
- `packsList` - List all Packs
- `packsUpdate` - Upgrade a Pack
- `packsUpload` - Upload a Pack file
- `pipelinesCreate` - Create a Pipeline
- `pipelinesDelete` - Delete a Pipeline
- `pipelinesGet` - Get a Pipeline
- `pipelinesList` - List all Pipelines
- `pipelinesUpdate` - Update a Pipeline
- `routesAppend` - Add a Route to the end of the Routing table
- `routesGet` - Get a Routing table
- `routesList` - List all Routes
- `routesUpdate` - Update a Route
- `sourcesCreate` - Create a Source
- `sourcesDelete` - Delete a Source
- `sourcesGet` - Get a Source
- `sourcesHecTokensCreate` - Add an HEC token and optional metadata to a Splunk HEC Source
- `sourcesHecTokensUpdate` - Update metadata for an HEC token for a Splunk HEC Source
- `sourcesList` - List all Sources
- `sourcesUpdate` - Update a Source
- `versionsBranchesGet` - Get the name of the Git branch that the Cribl configuration is checked out to
- `versionsBranchesList` - List all branches in the Git repository used for Cribl configuration
- `versionsCommitsCreate` - Create a new commit for pending changes to the Cribl configuration
- `versionsCommitsDiff` - Get the diff for a commit
- `versionsCommitsFilesCount` - Get a count of files that changed since a commit
- `versionsCommitsFilesList` - Get the names and statuses of files that changed since a commit
- `versionsCommitsGet` - Get the diff and log message for a commit
- `versionsCommitsList` - List the commit history
- `versionsCommitsPush` - Push local commits to the remote repository
- `versionsCommitsRevert` - Revert a commit in the local repository
- `versionsCommitsUndo` - Discard uncommitted (staged) changes
- `versionsConfigsGet` - Get the configuration and status for the Git integration
- `versionsStatusesGet` - Get the status of the current working tree
Certain SDK methods accept files as part of a multi-part request. It is possible and typically recommended to upload files as a stream rather than reading the entire contents into memory. This avoids excessive memory consumption and potentially crashing with out-of-memory errors when working with very large files. The following example demonstrates how to attach a file stream to a request.
Tip
Depending on your JavaScript runtime, there are convenient utilities that return a handle to a file without reading the entire contents into memory:
- Node.js v20+: Since v20, Node.js comes with a native `openAsBlob` function in `node:fs`.
- Bun: The native `Bun.file` function produces a file handle that can be used for streaming file uploads.
- Browsers: All supported browsers return an instance of a `File` when reading the value from an `<input type="file">` element.
- Node.js v18: A file stream can be created using the `fileFrom` helper from `fetch-blob/from.js`.
```typescript
import { CriblControlPlane } from "cribl-control-plane";
import { openAsBlob } from "node:fs";

const criblControlPlane = new CriblControlPlane({
  serverURL: "https://api.example.com",
  security: {
    bearerAuth: process.env["CRIBLCONTROLPLANE_BEARER_AUTH"] ?? "",
  },
});

async function run() {
  const result = await criblControlPlane.packs.upload({
    filename: "example.file",
    requestBody: await openAsBlob("example.file"),
  });

  console.log(result);
}

run();
```

Some of the endpoints in this SDK support retries. If you use the SDK without any configuration, it will fall back to the default retry strategy provided by the API. However, the default retry strategy can be overridden on a per-operation basis, or across the entire SDK.
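To build intuition for the backoff parameters used in the retry configuration examples, here is a small standalone model of the delay sequence an exponential backoff strategy produces. This is an illustration of backoff in general, not the SDK's exact algorithm (real implementations typically add jitter); the helper name is hypothetical:

```typescript
// Hypothetical helper (not part of the SDK): model the sequence of retry
// delays, in ms, that an exponential backoff strategy would produce.
function backoffIntervals(opts: {
  initialInterval: number; // first delay, in ms
  maxInterval: number; // cap on any single delay, in ms
  exponent: number; // growth factor per attempt
  maxElapsedTime: number; // stop retrying once total delay would exceed this
}): number[] {
  const delays: number[] = [];
  let elapsed = 0;
  let attempt = 0;
  for (;;) {
    // Attempt n waits initialInterval * exponent^n, capped at maxInterval.
    const delay = Math.min(
      opts.initialInterval * Math.pow(opts.exponent, attempt),
      opts.maxInterval,
    );
    if (elapsed + delay > opts.maxElapsedTime) break; // give up
    delays.push(delay);
    elapsed += delay;
    attempt += 1;
  }
  return delays;
}

// With the example configuration, delays start at 1 ms, grow by 10% per
// attempt, and stop once the cumulative delay would exceed 100 ms.
const delays = backoffIntervals({
  initialInterval: 1,
  maxInterval: 50,
  exponent: 1.1,
  maxElapsedTime: 100,
});
console.log(delays.length, "retries before giving up");
```
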
To change the default retry strategy for a single API call, simply provide a retryConfig object to the call:
```typescript
import { CriblControlPlane } from "cribl-control-plane";

const criblControlPlane = new CriblControlPlane({
  serverURL: "https://api.example.com",
  security: {
    bearerAuth: process.env["CRIBLCONTROLPLANE_BEARER_AUTH"] ?? "",
  },
});

async function run() {
  const result = await criblControlPlane.lakeDatasets.create({
    lakeId: "<id>",
    criblLakeDataset: {
      acceleratedFields: [
        "<value 1>",
        "<value 2>",
      ],
      bucketName: "<value>",
      cacheConnection: {
        acceleratedFields: [
          "<value 1>",
          "<value 2>",
        ],
        backfillStatus: "pending",
        cacheRef: "<value>",
        createdAt: 7795.06,
        lakehouseConnectionType: "cache",
        migrationQueryId: "<id>",
        retentionInDays: 1466.58,
      },
      deletionStartedAt: 8310.58,
      description:
        "pleased toothbrush long brush smooth swiftly rightfully phooey chapel",
      format: "ddss",
      httpDAUsed: true,
      id: "<id>",
      metrics: {
        currentSizeBytes: 6170.04,
        metricsDate: "<value>",
      },
      retentionPeriodInDays: 456.37,
      searchConfig: {
        datatypes: [
          "<value 1>",
        ],
        metadata: {
          earliest: "<value>",
          enableAcceleration: true,
          fieldList: [
            "<value 1>",
            "<value 2>",
          ],
          latestRunInfo: {
            earliestScannedTime: 4334.7,
            finishedAt: 6811.22,
            latestScannedTime: 5303.3,
            objectCount: 9489.04,
          },
          scanMode: "detailed",
        },
      },
      storageLocationId: "<id>",
      viewName: "<value>",
    },
  }, {
    retries: {
      strategy: "backoff",
      backoff: {
        initialInterval: 1,
        maxInterval: 50,
        exponent: 1.1,
        maxElapsedTime: 100,
      },
      retryConnectionErrors: false,
    },
  });

  console.log(result);
}

run();
```

If you'd like to override the default retry strategy for all operations that support retries, you can provide a `retryConfig` at SDK initialization:
```typescript
import { CriblControlPlane } from "cribl-control-plane";

const criblControlPlane = new CriblControlPlane({
  serverURL: "https://api.example.com",
  retryConfig: {
    strategy: "backoff",
    backoff: {
      initialInterval: 1,
      maxInterval: 50,
      exponent: 1.1,
      maxElapsedTime: 100,
    },
    retryConnectionErrors: false,
  },
  security: {
    bearerAuth: process.env["CRIBLCONTROLPLANE_BEARER_AUTH"] ?? "",
  },
});

async function run() {
  const result = await criblControlPlane.lakeDatasets.create({
    lakeId: "<id>",
    criblLakeDataset: {
      acceleratedFields: [
        "<value 1>",
        "<value 2>",
      ],
      bucketName: "<value>",
      cacheConnection: {
        acceleratedFields: [
          "<value 1>",
          "<value 2>",
        ],
        backfillStatus: "pending",
        cacheRef: "<value>",
        createdAt: 7795.06,
        lakehouseConnectionType: "cache",
        migrationQueryId: "<id>",
        retentionInDays: 1466.58,
      },
      deletionStartedAt: 8310.58,
      description:
        "pleased toothbrush long brush smooth swiftly rightfully phooey chapel",
      format: "ddss",
      httpDAUsed: true,
      id: "<id>",
      metrics: {
        currentSizeBytes: 6170.04,
        metricsDate: "<value>",
      },
      retentionPeriodInDays: 456.37,
      searchConfig: {
        datatypes: [
          "<value 1>",
        ],
        metadata: {
          earliest: "<value>",
          enableAcceleration: true,
          fieldList: [
            "<value 1>",
            "<value 2>",
          ],
          latestRunInfo: {
            earliestScannedTime: 4334.7,
            finishedAt: 6811.22,
            latestScannedTime: 5303.3,
            objectCount: 9489.04,
          },
          scanMode: "detailed",
        },
      },
      storageLocationId: "<id>",
      viewName: "<value>",
    },
  });

  console.log(result);
}

run();
```

`CriblControlPlaneError` is the base class for all HTTP error responses. It has the following properties:
| Property | Type | Description |
|---|---|---|
| `error.message` | string | Error message |
| `error.statusCode` | number | HTTP response status code, e.g. 404 |
| `error.headers` | Headers | HTTP response headers |
| `error.body` | string | HTTP body. Can be an empty string if no body is returned. |
| `error.rawResponse` | Response | Raw HTTP response |
| `error.data$` | | Optional. Some errors may contain structured data. See Error Classes. |
```typescript
import { CriblControlPlane } from "cribl-control-plane";
import * as errors from "cribl-control-plane/models/errors";

const criblControlPlane = new CriblControlPlane({
  serverURL: "https://api.example.com",
  security: {
    bearerAuth: process.env["CRIBLCONTROLPLANE_BEARER_AUTH"] ?? "",
  },
});

async function run() {
  try {
    const result = await criblControlPlane.lakeDatasets.create({
      lakeId: "<id>",
      criblLakeDataset: {
        acceleratedFields: [
          "<value 1>",
          "<value 2>",
        ],
        bucketName: "<value>",
        cacheConnection: {
          acceleratedFields: [
            "<value 1>",
            "<value 2>",
          ],
          backfillStatus: "pending",
          cacheRef: "<value>",
          createdAt: 7795.06,
          lakehouseConnectionType: "cache",
          migrationQueryId: "<id>",
          retentionInDays: 1466.58,
        },
        deletionStartedAt: 8310.58,
        description:
          "pleased toothbrush long brush smooth swiftly rightfully phooey chapel",
        format: "ddss",
        httpDAUsed: true,
        id: "<id>",
        metrics: {
          currentSizeBytes: 6170.04,
          metricsDate: "<value>",
        },
        retentionPeriodInDays: 456.37,
        searchConfig: {
          datatypes: [
            "<value 1>",
          ],
          metadata: {
            earliest: "<value>",
            enableAcceleration: true,
            fieldList: [
              "<value 1>",
              "<value 2>",
            ],
            latestRunInfo: {
              earliestScannedTime: 4334.7,
              finishedAt: 6811.22,
              latestScannedTime: 5303.3,
              objectCount: 9489.04,
            },
            scanMode: "detailed",
          },
        },
        storageLocationId: "<id>",
        viewName: "<value>",
      },
    });

    console.log(result);
  } catch (error) {
    // The base class for HTTP error responses
    if (error instanceof errors.CriblControlPlaneError) {
      console.log(error.message);
      console.log(error.statusCode);
      console.log(error.body);
      console.log(error.headers);

      // Depending on the method, different errors may be thrown
      if (error instanceof errors.ErrorT) {
        console.log(error.data$.message); // string
      }
    }
  }
}

run();
```

Primary errors:
- `CriblControlPlaneError`: The base class for HTTP error responses.
- `ErrorT`: Unexpected error. Status code `500`.
Less common errors (7)
Network errors:
- `ConnectionError`: HTTP client was unable to make a request to a server.
- `RequestTimeoutError`: HTTP request timed out due to an AbortSignal signal.
- `RequestAbortedError`: HTTP request was aborted by the client.
- `InvalidRequestError`: Any input used to create a request is invalid.
- `UnexpectedClientError`: Unrecognised or unexpected error.
Inherit from CriblControlPlaneError:
- `HealthServerStatusError`: Healthy status. Status code `420`. Applicable to 1 of 63 methods.*
- `ResponseValidationError`: Type mismatch between the data returned from the server and the structure expected by the SDK. See `error.rawValue` for the raw value and `error.pretty()` for a nicely formatted multi-line string.
* Check the method documentation to see if the error is applicable.
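When deciding how to react to these errors, it can help to branch on `error.statusCode`. The helper below is a hypothetical sketch of one way to bucket status codes (the categories are our own, not part of the SDK):

```typescript
// Hypothetical helper, not part of the SDK: coarse classification of HTTP
// status codes, e.g. to decide how a failed call should be handled.
type ErrorKind = "ok" | "auth" | "client" | "server";

function classifyStatus(statusCode: number): ErrorKind {
  if (statusCode < 400) return "ok"; // success or redirect
  if (statusCode === 401 || statusCode === 403) return "auth"; // re-authenticate
  if (statusCode < 500) return "client"; // fix the request before retrying
  return "server"; // may be transient; a retry could succeed
}
```

A handler might, for example, refresh credentials on `"auth"`, surface `"client"` errors to the caller immediately, and let the retry configuration deal with `"server"` errors.
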
The TypeScript SDK makes API calls using an `HTTPClient` that wraps the native Fetch API. This client is a thin wrapper around `fetch` and provides the ability to attach hooks around the request lifecycle that can be used to modify the request or handle errors and responses.
The HTTPClient constructor takes an optional fetcher argument that can be
used to integrate a third-party HTTP client or when writing tests to mock out
the HTTP client and feed in fixtures.
The following example shows how to use the "beforeRequest" hook to add a custom header and a timeout to requests, and how to use the "requestError" hook to log errors:
```typescript
import { CriblControlPlane } from "cribl-control-plane";
import { HTTPClient } from "cribl-control-plane/lib/http";

const httpClient = new HTTPClient({
  // fetcher takes a function that has the same signature as native `fetch`.
  fetcher: (request) => {
    return fetch(request);
  },
});

httpClient.addHook("beforeRequest", (request) => {
  const nextRequest = new Request(request, {
    signal: request.signal || AbortSignal.timeout(5000),
  });

  nextRequest.headers.set("x-custom-header", "custom value");

  return nextRequest;
});

httpClient.addHook("requestError", (error, request) => {
  console.group("Request Error");
  console.log("Reason:", `${error}`);
  console.log("Endpoint:", `${request.method} ${request.url}`);
  console.groupEnd();
});

const sdk = new CriblControlPlane({ httpClient: httpClient });
```

You can set up your SDK to emit debug logs for SDK requests and responses.
You can pass a logger that matches console's interface as an SDK option.
Warning
Beware that debug logging will reveal secrets, like API tokens in headers, in log messages printed to a console or files. It's recommended to use this feature only during local development and not in production.
```typescript
import { CriblControlPlane } from "cribl-control-plane";

const sdk = new CriblControlPlane({ debugLogger: console });
```

You can also enable a default debug logger by setting the environment variable `CRIBLCONTROLPLANE_DEBUG` to `true`.
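Because debug logging can leak secrets (see the warning above), one option during local development is to pass a logger that redacts token-like values before printing. The sketch below is hypothetical, not an SDK feature; it only assumes the `debugLogger` option accepts a console-compatible object:

```typescript
// Hypothetical redacting logger: same shape as `console`, but masks Bearer
// tokens before printing. Not part of the SDK; shown only as a pattern.
function redact(value: unknown): unknown {
  if (typeof value === "string") {
    // Replace anything that looks like "Bearer <token>" with a placeholder.
    return value.replace(/Bearer\s+\S+/g, "Bearer <redacted>");
  }
  return value;
}

const redactingLogger = {
  log: (...args: unknown[]) => console.log(...args.map(redact)),
  debug: (...args: unknown[]) => console.debug(...args.map(redact)),
  group: (...args: unknown[]) => console.group(...args.map(redact)),
  groupEnd: () => console.groupEnd(),
};

// Usage sketch: new CriblControlPlane({ ..., debugLogger: redactingLogger })
```
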