
Add CLI functionality for fetching connect logs #3123


Open · wants to merge 11 commits into base: main

Conversation

Member

@mchoudhary-cflt mchoudhary-cflt commented Jun 17, 2025

Release Notes

Breaking Changes

  • PLACEHOLDER

New Features

  • Add a logs subcommand to the connect command for fetching connector logs.

Bug Fixes

  • PLACEHOLDER

Checklist

  • I have successfully built and used a custom CLI binary from this PR, without linter issues.
  • I have clearly specified in the What section below whether this PR applies to Confluent Cloud, Confluent Platform, or both.
  • I have verified this PR in Confluent Cloud pre-prod or production environment, if applicable.
  • I have verified this PR in Confluent Platform on-premises environment, if applicable.
  • I have attached manual CLI verification results or screenshots in the Test & Review section below.
  • I have added appropriate CLI integration or unit tests for any new or updated commands and functionality.
  • I confirm that this PR introduces no breaking changes or backward compatibility issues.
  • I have indicated the potential customer impact if something goes wrong in the Blast Radius section below.
  • I have put checkmarks below confirming that the feature associated with this PR is enabled in:
    • Confluent Cloud prod
    • Confluent Cloud stag
    • Confluent Platform
    • Check this box if the feature is enabled for certain organizations only

What

Add the confluent connect logs command for fetching connector logs.

These are Confluent Cloud resources.

Blast Radius

None. This is an additive change, so no existing commands should be affected.

References

https://confluentinc.atlassian.net/browse/LOGGING-3238

Test & Review

Connect Logs

confluent connect logs lcc-devc9myo50 --start-time "2025-06-16T05:35:00Z" --end-time "2025-06-16T05:45:00Z" --level "INFO"
Found 2 log entries for connector lcc-devc9myo50:

         Timestamp         | Level | Task ID |                Message                 
---------------------------+-------+---------+----------------------------------------
  2025-06-16T05:43:23.757Z | INFO  | task-0  | WorkerSourceTask{id=lcc-devc9myo50-0}  
                           |       |         | Committing offsets for 128             
                           |       |         | acknowledged messages                  
  2025-06-16T05:44:23.761Z | INFO  | task-0  | WorkerSourceTask{id=lcc-devc9myo50-0}  
                           |       |         | Committing offsets for 130             
                           |       |         | acknowledged messages

JSON Format

confluent connect logs lcc-devc9myo50 --start-time "2025-06-16T05:35:00Z" --end-time "2025-06-16T05:45:00Z" --level "INFO" --output "json"
[
  {
    "timestamp": "2025-06-16T05:44:23.761Z",
    "level": "INFO",
    "message": "WorkerSourceTask{id=lcc-devc9myo50-0} Committing offsets for 130 acknowledged messages",
    "task_id": "task-0",
    "id": "lcc-devc9myo50"
  },
  {
    "timestamp": "2025-06-16T05:43:23.757Z",
    "level": "INFO",
    "message": "WorkerSourceTask{id=lcc-devc9myo50-0} Committing offsets for 128 acknowledged messages",
    "task_id": "task-0",
    "id": "lcc-devc9myo50"
  }
]

YAML Format

confluent connect logs lcc-devc9myo50 --start-time "2025-06-16T05:35:00Z" --end-time "2025-06-16T05:45:00Z" --level "INFO" --output "yaml"
- timestamp: "2025-06-16T05:44:23.761Z"
  level: INFO
  message: WorkerSourceTask{id=lcc-devc9myo50-0} Committing offsets for 130 acknowledged messages
  taskid: task-0
  id: lcc-devc9myo50
  exception: null
- timestamp: "2025-06-16T05:43:23.757Z"
  level: INFO
  message: WorkerSourceTask{id=lcc-devc9myo50-0} Committing offsets for 128 acknowledged messages
  taskid: task-0
  id: lcc-devc9myo50
  exception: null

@Copilot Copilot AI review requested due to automatic review settings June 17, 2025 04:59
@mchoudhary-cflt mchoudhary-cflt requested a review from a team as a code owner June 17, 2025 04:59
@confluent-cla-assistant

🎉 All Contributor License Agreements have been signed. Ready to merge.
Please push an empty commit if you would like to re-run the checks to verify CLA status for all contributors.


@Copilot Copilot AI left a comment


Pull Request Overview

Adds a new connect logs subcommand to query connector logs via the logging API, including test-server support, client implementation, CLI integration, pagination state, and fixtures for output formats.

  • Mock server: added a handler for /logs/v1/search and registered it in the test router
  • Client & config: introduced SearchConnectorLogs in ccloudv2, plus ConnectLogsQueryState in context for pagination
  • CLI: implemented connect logs command with flags, output formatting, and pagination support, and added integration tests and golden fixtures

Reviewed Changes

Copilot reviewed 13 out of 13 changed files in this pull request and generated 1 comment.

Show a summary per file
File Description
test/test-server/connect_handler.go Added handleLogsSearch mock endpoint to simulate the logs API
test/test-server/ccloud_router.go Registered /logs/v1/search route for the new handler
test/fixtures/output/connect/logs/logs.golden Added human-readable output fixture for logs
test/fixtures/output/connect/logs/logs-json.golden Added JSON output fixture for logs
test/fixtures/output/connect/logs/logs-yaml.golden Added YAML output fixture for logs
test/fixtures/output/connect/logs/logs-help.golden Added help output fixture for the logs command
test/connect_test.go Added TestConnectLogs integration tests
pkg/config/environment_context.go Introduced ConnectLogsQueryState field in EnvironmentContext
pkg/config/context.go Implemented GetConnectLogsQueryState/SetConnectLogsQueryState
pkg/config/connect_logs_state.go Defined the ConnectLogsQueryState struct
pkg/ccloudv2/logging.go Added SearchConnectorLogs client method and related types
internal/connect/command_logs.go Added logsCommand implementation for the connect logs subcommand
internal/connect/command.go Registered newLogsCommand in the connect root command
Comments suppressed due to low confidence (3)

pkg/config/connect_logs_state.go:3

  • [nitpick] Field names TaskId and ConnectorId should use Go's preferred acronym style (TaskID, ConnectorID) for consistency with naming conventions.
type ConnectLogsQueryState struct {

internal/connect/command_logs.go:65

  • Consider declaring --next as a boolean flag (BoolVar) instead of string, which would be more idiomatic and eliminate the need for manual string parsing.
cmd.Flags().String("next", "FALSE", "Whether to fetch next page of logs (TRUE, FALSE) after the next execution of the command (optional).")

test/test-server/connect_handler.go:19

  • Missing import statements for encoding/json, net/http, strings, testing, and testify/require. Add the necessary imports so the handler compiles correctly.
type LoggingLogEntry struct {

@mchoudhary-cflt mchoudhary-cflt changed the title add logs subcommand for connect Add logging CLI for connect logs Jun 17, 2025
@mchoudhary-cflt mchoudhary-cflt changed the title Add logging CLI for connect logs Add CLI functionality for fetching connect logs Jun 17, 2025
cmd.Flags().String("start-time", "", "Start time for log query (e.g., 2025-02-01T00:00:00Z).")
cmd.Flags().String("end-time", "", "End time for log query (e.g., 2025-02-01T23:59:59Z).")
cmd.Flags().String("level", "ERROR", "Log level filter (INFO, WARN, ERROR). Defaults to ERROR.")
cmd.Flags().String("task-id", "", "Task ID filter (optional).")

I think we can remove this. Let's match what we are doing in UI

Member Author

@mchoudhary-cflt mchoudhary-cflt Jun 17, 2025

Removed task-id

Message string `human:"Message" serialized:"message"`
}

func newLogsCommand(prerunner pcmd.PreRunner) *cobra.Command {

Maybe we can break this into 3 functions: get parameters & validate it, then search and then the output part?

Member Author

Based on how other commands are, kept the getting parameters and search part in this function and moved out the rest of the logic into individual functions

Args: cobra.ExactArgs(1),
Example: examples.BuildExampleString(
examples.Example{
Text: "Query connector logs with log level ERROR between the provided time window:",

Let's add an example where someone can search for ERROR & WARN?

Member Author

Done

}

// SearchConnectorLogs searches logs for a specific connector using the Logging API
func (c *Client) SearchConnectorLogs(environmentId, kafkaClusterId, connectorId, startTime, endTime string, levels []string, taskId, searchText string, pageSize int, pageToken string) (*LoggingSearchResponse, error) {

Can we break this function into different parts as well?

Member Author

Done


type logEntryOut struct {
Timestamp string `human:"Timestamp" serialized:"timestamp"`
Level string `human:"Level" serialized:"level"`
TaskId string `human:"Task ID,omitempty" serialized:"task_id,omitempty"`
Contributor

Any reason why this is omitempty and others are not?

Member Author

Fixed

Example: examples.BuildExampleString(
examples.Example{
Text: "Query connector logs with log level ERROR between the provided time window:",
Code: `confluent connect logs lcc-123456 --level ERROR --start-time "2025-02-01T00:00:00Z" --end-time "2025-02-01T23:59:59Z"`,
Contributor

Do we expect the user to provide the start and end times as flags in this format? I'm not sure it's great UX to ask for such timestamps in the CLI. @sgagniere, your views on this?

Member

It is a bit unfortunate, but it looks like it's necessary for the API call.

But I would include a link to the standard explaining the time format, either in an example or the flag descriptions.

Member Author

It's in RFC3339 format with UTC timezone; however, we don't allow milliseconds. I've updated the flag description to "Start time for log query in RFC3339 format with UTC timezone, i.e. YYYY-MM-DDTHH:MM:SSZ (e.g., 2025-02-01T00:00:00Z)", which should make things clear.
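The constraint above (RFC3339, UTC, no milliseconds) can be checked before the API call. A minimal sketch, assuming a hypothetical validateLogTime helper rather than the actual CLI code:

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// validateLogTime is a hypothetical helper (not the actual CLI code) that
// enforces the documented format: RFC3339 in UTC, i.e. YYYY-MM-DDTHH:MM:SSZ,
// with no fractional seconds.
func validateLogTime(s string) error {
	// time.Parse with the RFC3339 layout also accepts numeric offsets like
	// +05:30 and fractional seconds, so require the literal "Z" suffix and
	// reject a "." explicitly.
	if !strings.HasSuffix(s, "Z") || strings.Contains(s, ".") {
		return fmt.Errorf("time %q must be in YYYY-MM-DDTHH:MM:SSZ format", s)
	}
	if _, err := time.Parse(time.RFC3339, s); err != nil {
		return fmt.Errorf("time %q must be in YYYY-MM-DDTHH:MM:SSZ format", s)
	}
	return nil
}

func main() {
	fmt.Println(validateLogTime("2025-02-01T00:00:00Z") == nil)      // valid
	fmt.Println(validateLogTime("2025-02-01T00:00:00.123Z") == nil)  // fractional seconds rejected
	fmt.Println(validateLogTime("2025-02-01T00:00:00+05:30") == nil) // non-UTC offset rejected
}
```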

Level: level,
SearchText: searchText,
ConnectorId: connectorId,
PageToken: "",
Contributor

Let's add omitempty for this instead of providing an empty string here?

Member Author

This struct isn't for JSON parsing. We store the current query parameters and the returned page token in the context state so that on the next run we can tell whether to fetch a subsequent page or start a fresh query (determined by whether PageToken is "" or a non-empty string).
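The check described above can be sketched roughly as follows; the struct and helper are illustrative, not the CLI's actual ConnectLogsQueryState handling:

```go
package main

import "fmt"

// Illustrative mirror of the query state kept in the CLI context; the field
// names are assumptions, not the actual ConnectLogsQueryState definition.
type connectLogsQueryState struct {
	StartTime   string
	EndTime     string
	Level       string
	SearchText  string
	ConnectorId string
	PageToken   string
}

// sameQuery reports whether a --next invocation continues the stored query:
// every parameter except the page token must match the previous run.
func sameQuery(last, current connectLogsQueryState) bool {
	return last.StartTime == current.StartTime &&
		last.EndTime == current.EndTime &&
		last.Level == current.Level &&
		last.SearchText == current.SearchText &&
		last.ConnectorId == current.ConnectorId
}

func main() {
	last := connectLogsQueryState{
		StartTime:   "2025-06-16T05:35:00Z",
		EndTime:     "2025-06-16T05:45:00Z",
		Level:       "INFO",
		ConnectorId: "lcc-devc9myo50",
		PageToken:   "abc123", // token saved from the previous run
	}
	current := last
	current.PageToken = "" // not yet known for this run

	if sameQuery(last, current) {
		if last.PageToken == "" {
			fmt.Println("No more logs for the current query")
		} else {
			fmt.Println("fetching next page with token", last.PageToken)
		}
	} else {
		fmt.Println("parameters changed; starting a fresh query")
	}
}
```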

if err != nil {
return err
}
connectorName := connector.Info.GetName()
Contributor

do we have a getter for Info?

Member Author

Yes, we do; however, it returns an empty struct when nothing is found. I moved this to the getter and added a check after the name is obtained.

return nil
}

logs, err := c.V2Client.SearchConnectorLogs(environmentId, kafkaCluster.ID, connectorName, startTime, endTime, levels, searchText, 200, lastQueryPageToken)
Contributor

we should not use kafkaCluster.ID. Either use a getter here, or check for empty/null case

Member Author

Fixed

lastLogQuery.ConnectorId == currentLogQuery.ConnectorId) {
lastQueryPageToken = lastLogQuery.PageToken
// If page token for the last query is empty, it means there are no more logs for the current query
if lastQueryPageToken == "" {
Contributor

hmm, is this the error case? Something seems off to me here, have we tested this scenario?

Member Author

@mchoudhary-cflt mchoudhary-cflt Jun 23, 2025

mchoudhary@WH9KW4M76R cli % ./confluent connect logs lcc-123 --start-time "2025-06-22T05:35:00Z" --end-time "2025-06-22T05:45:00Z" --level "INFO" --next
No more logs for the current query

This is what the output looks like when the current query has no more pages left. This condition triggers the code to immediately return since there's nothing to do.

func (c *logsCommand) storeQueryInContext(logs *ccloudv2.LoggingSearchResponse, currentLogQuery *config.ConnectLogsQueryState) error {
var err error
if logs.Metadata != nil {
currentLogQuery.PageToken, err = extractPageToken(logs.Metadata.Next)
Contributor

lets use a setter here if possible

Member Author

Fixed
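For context, a page token can be pulled out of a metadata "next" URL with net/url. This is a hedged sketch of what extractPageToken might look like, not the actual implementation (the "page_token" query parameter name is an assumption):

```go
package main

import (
	"fmt"
	"net/url"
)

// extractPageToken is a hypothetical sketch of extracting a page token from
// the metadata "next" URL; the real helper in the CLI may differ.
func extractPageToken(next string) (string, error) {
	if next == "" {
		return "", nil // last page: nothing to carry forward
	}
	u, err := url.Parse(next)
	if err != nil {
		return "", err
	}
	return u.Query().Get("page_token"), nil
}

func main() {
	token, err := extractPageToken("https://api.confluent.cloud/logs/v1/search?page_token=abc123")
	if err != nil {
		panic(err)
	}
	fmt.Println(token) // abc123
}
```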

"github.com/confluentinc/cli/v4/pkg/auth"
"github.com/confluentinc/cli/v4/pkg/errors"
)

Contributor

Are all of these always guaranteed to have a value? we may need omitempty for the ones which can have an empty value.

Member Author

Yes

Kind string `json:"kind"`
}

func (c *Client) SearchConnectorLogs(environmentId, kafkaClusterId, connectorId, startTime, endTime string, levels []string, searchText string, pageSize int, pageToken string) (*LoggingSearchResponse, error) {
Contributor

Nine parameters is above the limit we have for this. @sgagniere, are we okay with this?

Member

Yeah since we're passing a CRN to LoggingSearchRequest, I would prefer if we constructed the CRN outside of this function and passed that in instead.

Member Author

Fixed
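Constructing the CRN at the call site keeps the search function's signature small. A sketch assuming connector CRNs follow an environment/cloud-cluster/connector path pattern (the exact schema expected by the Logging API is not confirmed here):

```go
package main

import "fmt"

// buildConnectorCrn sketches constructing the connector CRN outside the
// search call, as suggested in review. The path segments below are an
// assumption; the real CRN schema may differ.
func buildConnectorCrn(environmentId, kafkaClusterId, connectorId string) string {
	return fmt.Sprintf(
		"crn://confluent.cloud/environment=%s/cloud-cluster=%s/connector=%s",
		environmentId, kafkaClusterId, connectorId,
	)
}

func main() {
	// The pre-built CRN can then be passed to SearchConnectorLogs in place
	// of the three separate identifier parameters.
	fmt.Println(buildConnectorCrn("env-123", "lkc-456", "lcc-789"))
}
```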

@@ -8,6 +8,7 @@ Available Commands:
cluster Manage Connect clusters.
custom-plugin Manage custom connector plugins.
event Manage log events for managed connectors.
logs Query logs for connectors.
Contributor

Suggested change
logs Query logs for connectors.
logs Manage logs for connectors.

Member Author

Fixed

@sonarqube-confluent

Failed

  • 64.30% Coverage on New Code (is less than 80.00%)

Analysis Details

5 Issues

  • Bug 0 Bugs
  • Vulnerability 0 Vulnerabilities
  • Code Smell 5 Code Smells

Coverage and Duplications

  • Coverage 64.30% Coverage (77.30% Estimated after merge)
  • Duplications No duplication information (0.00% Estimated after merge)

Project ID: cli

View in SonarQube
