
FR: please support streaming of logs #122


Open
linde opened this issue Apr 28, 2025 · 8 comments · May be fixed by #177
Labels: enhancement (New feature or request), good first issue (Good for newcomers)

Comments

@linde

linde commented Apr 28, 2025

I was in a situation where I had a workload I wanted to watch as it was starting up (i.e. tailing its logs). This workload happened to take a long time to become scheduled/runnable, so I was hoping kubectl ai could support a prompt along the lines of the following:

could you please wait until the workload-with-warmup deployment is running, then tail its logs

it is worth noting the following works just as intended:

could you please wait until the workload-with-warmup deployment is running, then cat its logs

it prints the logs and even remarks that there is just one line at that point:

>>> could you please wait until the workload-with-warmup deployment is running, then cat its logs

  Okay, I will wait for the  workload-with-warmup  deployment to become available 
  and then fetch its logs. This might take a moment.                              

  Running: kubectl wait --for=condition=Available deployment/workload-with-warmup --timeout=120s

  The deployment  workload-with-warmup  is now available! ✅                      
                                                                                  
  Now, I'll fetch the logs for you.                                               

  Running: kubectl logs deployment/workload-with-warmup

  Okay, the deployment  workload-with-warmup  is running and here are the latest  
  logs from its default container ( workload ):                                   
                                                                                  
    Mon Apr 28 18:41:50 UTC 2025                                                  
                                                                                  
  It seems there's only one log line since the deployment became available. Let me
  know if you need anything else! 😊                                              

when I asked it to tail the logs, it does this, which is remarkable, but doesn't handle the streaming part:

$ kubectl ai 

  Hey there, what can I help you with today?                                      



>>> could you please wait until the workload-with-warmup deployment is running, then tail its logs           

  Okay, I can do that. I'll wait for the deployment  workload-with-warmup  to     
  become                                                                          
  available and then start streaming its logs.                                    


  Running: kubectl wait --for=condition=available deployment/workload-with-warmup --timeout=300s

  Running: kubectl logs -f deployment/workload-with-warmup

it just never returns.

I get that the streaming part is probably a challenge to handle; just filing an FR to capture the useful CUJ.

here is the config I had for my "workload":

apiVersion: apps/v1
kind: Deployment
metadata:
  name: workload-with-warmup
spec:
  replicas: 1
  selector:
    matchLabels:
      app: workload-with-warmup
  template:
    metadata:
      labels:
        app: workload-with-warmup
    spec:
      initContainers:
        - name: warmup
          image: ubuntu:latest
          args: ['bash', '-c', 'for i in `seq 1 30`; do echo warming up $i; sleep 1s; done']
      containers:
        - name: workload
          image: ubuntu:latest
          args: ['bash', '-c', 'while true; do sleep 3s && date; done']
@droot
Member

droot commented Apr 28, 2025

This is indeed an interesting use case. Thanks @linde, and I appreciate the detailed write-up.

Just for posterity, sharing an explanation of the current behavior.

The agent runs the command (as suggested by the LLM), waits for it to finish, and then calls the LLM with the command's output. The LLM then determines whether that output is enough to satisfy the user's task or whether it needs to use another tool.

So, in this case, logs -f never finishes and the agent hangs forever. We ran into a somewhat similar situation with kubectl edit, where the user is expected to edit a resource (with whatever editor is configured). Our tool harness intercepts such commands that require interaction and instructs the LLM to pick an alternative command; in the kubectl edit case, it goes and uses patch instead. (We also added an explicit instruction to avoid commands that require user interaction.)
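For illustration, here is a minimal sketch of that execute-then-return pattern (simplified and hypothetical, not the actual kubectl-ai code): the harness blocks on the command, so a follow-style command never hands control back to the LLM.

```go
package tools // hypothetical package name, for illustration only

import (
	"context"
	"os/exec"
)

// executeCommand is a simplified stand-in for the tool harness: it blocks
// until the process exits and only then returns the output to the caller
// (which hands it to the LLM). A command like `kubectl logs -f` never exits,
// so this call never returns and the agent appears to hang.
func executeCommand(ctx context.Context, command string) (string, error) {
	cmd := exec.CommandContext(ctx, "bash", "-c", command)
	out, err := cmd.CombinedOutput() // blocks until the command terminates
	if err != nil {
		return "", err
	}
	return string(out), nil
}
```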

Brainstorming a few ideas:

  • The tool harness can intercept commands like tail -f and reject them so the LLM selects an alternative. This improves on the current UX, where the command just appears to hang and leaves the user confused.
  • Show the output of a running command (note: we don't do that today): stream cmd("kubectl logs -f").stdout to kubectl-ai's stdout and handle CTRL-C such that the user is not bailed out of the REPL loop.

/cc @justinsb

@droot added the labels good first issue (Good for newcomers) and enhancement (New feature or request) on Apr 28, 2025
@Vinay-Khanagavi
Contributor

Hi! 👋
I’ve reviewed the code and the FR for supporting streaming of logs (e.g., kubectl logs -f ...).
Currently, commands are executed and their output is collected only after the command finishes (see executeCommand in pkg/tools/bash_tool.go and pkg/tools/kubectl_tool.go). This works for commands that terminate, but for streaming commands like kubectl logs -f, the process never ends, so the agent hangs and the user never sees any output.

Proposal:
Would you agree with the following approach to improve UX for streaming commands?

  • Detect streaming commands (e.g., those with -f or --follow).
  • Stream output to the user in real time (line by line), instead of waiting for the command to finish.
  • Allow the user to interrupt (e.g., with CTRL-C), and return to the REPL cleanly.
  • Show a message like “Streaming logs, press CTRL-C to stop and return to the prompt.”

This would require changes to the command execution logic (e.g., using cmd.Stdout as a pipe and reading lines as they arrive, with signal handling for interrupts).
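As a rough illustration (not the actual code; runStreaming and the wiring around it are hypothetical), the execution logic could look roughly like this:

```go
package tools // hypothetical package name, for illustration only

import (
	"bufio"
	"context"
	"fmt"
	"io"
	"os"
	"os/exec"
	"os/signal"
)

// runStreaming executes a command and forwards its stdout line by line as it
// arrives. CTRL-C (SIGINT) cancels only this command, so the caller's REPL
// loop keeps running.
func runStreaming(parent context.Context, command string, out io.Writer) error {
	ctx, stop := signal.NotifyContext(parent, os.Interrupt)
	defer stop()

	cmd := exec.CommandContext(ctx, "bash", "-c", command)
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		return err
	}
	cmd.Stderr = os.Stderr

	fmt.Fprintln(out, "Streaming output, press CTRL-C to stop and return to the prompt.")
	if err := cmd.Start(); err != nil {
		return err
	}

	scanner := bufio.NewScanner(stdout)
	for scanner.Scan() {
		fmt.Fprintln(out, scanner.Text()) // show each line in real time
	}

	err = cmd.Wait()
	if ctx.Err() != nil {
		return nil // user interrupted: stop cleanly, stay in the REPL
	}
	return err
}
```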

Would you be open to this direction?
Or do you prefer to reject streaming commands and suggest alternatives, as is done for kubectl edit?
Let me know your thoughts!

@linde
Author

linde commented May 8, 2025

that approach makes sense and is essentially how kubectl logs -f itself works.

I do worry slightly about hijacking CTRL-C for this; I'd actually expect CTRL-C to break out of the REPL entirely. Not religious on the topic, and open to hearing of other examples of similar "stream for a while, then break out" experiences from other utilities.

but in general, what you're describing is what I had expected.

@linde
Author

linde commented May 8, 2025

actually, I thought about it, and less works this way, using the interrupt character for this purpose. From the less man page:

F      Scroll forward, and keep trying to read when the end of
              file is reached.  Normally this command would be used when
              already at the end of the file.  It is a way to monitor the
              tail of a file which is growing while it is being viewed.
              (The behavior is similar to the "tail -f" command.)  To
              stop waiting for more data, enter the interrupt character
              (usually ^C).  On systems which support [poll(2)](https://man7.org/linux/man-pages/man2/poll.2.html) you can
              also use ^X or the character specified by the --intr
              option.  If the input is a pipe and the --exit-follow-on-
              close option is in effect, less will automatically stop
              waiting for data when the input side of the pipe is closed.

so, @Vinay-Khanagavi -- yes, that!

@Vinay-Khanagavi
Contributor

Thanks for the quick feedback and for sharing the less example!
I agree—following the familiar pattern of streaming output and using CTRL-C to break out (like less and tail -f) makes sense and should feel natural to users.
I’ll proceed with this approach:

  • Stream output for commands like kubectl logs -f
  • Allow CTRL-C to stop streaming and return to the REPL (not exit the whole tool)
  • Add a message to clarify how to exit streaming mode

@Vinay-Khanagavi
Contributor

Vinay-Khanagavi commented May 8, 2025

Just submitted a PR to address this feature request:
feat(tools): add support for streaming command output in bash_tool.go #177

This PR adds real-time streaming for commands like kubectl logs -f, with graceful handling of interrupts (CTRL-C) to return to the prompt—just as described in this issue.

Big thanks to @linde for the super clear explanation and the less analogy—really helped me get the streaming/interrupt behavior right. The details and man page reference made it much easier to implement. Appreciate it!

@droot
Member

droot commented May 8, 2025

Excellent. I like the solution, and thanks for the PR, @Vinay-Khanagavi; will take a look soon.

We may have to audit the kubectl commands to ensure that our logic for detecting blocking commands is robust. For example, the -f flag is also used with kubectl apply -f.
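For example, a detection helper would probably have to consider the subcommand, not just the flag. A rough, purely illustrative sketch (isStreamingCommand is a hypothetical name, and a real implementation would also need to handle flag values such as -n <namespace>):

```go
package tools // hypothetical package name, for illustration only

import "strings"

// isStreamingCommand guesses whether a kubectl command will stream forever.
// -f means "follow" for `kubectl logs` but "filename" for `kubectl apply -f`,
// so the flag alone is not enough; -w/--watch on `kubectl get` also streams.
func isStreamingCommand(command string) bool {
	fields := strings.Fields(command)
	streamFlag := false
	subcommand := ""
	for i, f := range fields {
		switch {
		case f == "-f" || f == "--follow" || f == "-w" || f == "--watch":
			streamFlag = true
		case i > 0 && subcommand == "" && !strings.HasPrefix(f, "-"):
			// Rough heuristic: first non-flag token after "kubectl".
			subcommand = f
		}
	}
	switch subcommand {
	case "logs":
		return streamFlag // kubectl logs -f / --follow
	case "get":
		return streamFlag // kubectl get ... -w / --watch
	default:
		return false // e.g. kubectl apply -f file.yaml is not streaming
	}
}
```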

@Vinay-Khanagavi
Contributor

Just to clarify: I initially opened the PR from my main branch. To keep things organized, I closed that PR and created a new one from a dedicated feature branch for this issue. This should make the history and future changes much cleaner. Thanks for your understanding!
