FR: please support streaming of logs #122
Comments
This is indeed an interesting use-case. Thanks @linde, and appreciate the detailed write-up. Just for posterity, sharing the explanation for the current behavior: the agent runs the command (as suggested by the LLM), waits for the command to finish, and then calls the LLM with the command's output. The LLM then determines whether that output is enough to satisfy the user's task or whether it needs another tool. So in this case, because `kubectl logs -f` never exits, the agent sits waiting for output that never completes (a rough sketch of this loop follows this comment). Brainstorming a few ideas:
/cc @justinsb
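(For readers skimming the thread, a minimal Go sketch of the blocking loop described above. It is purely illustrative: the function names are made up and this is not the actual kubectl-ai code.)

```go
package main

import (
	"fmt"
	"os/exec"
)

// runTool mirrors the behavior described above: run the LLM-suggested
// command, wait for it to exit, and only then return its combined output so
// it can be passed back to the LLM.
func runTool(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return string(out), err
}

func main() {
	// A follow/stream command such as `kubectl logs -f ...` never exits on
	// its own, so CombinedOutput blocks indefinitely and the agent never gets
	// a chance to send anything back to the LLM.
	out, err := runTool("kubectl", "logs", "-f", "my-pod")
	fmt.Println(out, err)
}
```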
Hi! 👋 Proposal:
This would require changes to the command execution logic (e.g., using cmd.Stdout as a pipe and reading lines as they arrive, with signal handling for interrupts). Would you be open to this direction?
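(A rough sketch of that direction, assuming `os/exec` with a stdout pipe and a line scanner. The names here are hypothetical and this is not the eventual PR's code.)

```go
package main

import (
	"bufio"
	"context"
	"fmt"
	"os/exec"
)

// streamCommand starts the command and forwards each line of stdout as it
// arrives instead of waiting for the process to exit. Cancelling ctx (for
// example on CTRL-C) kills the process and ends the stream.
func streamCommand(ctx context.Context, name string, args ...string) error {
	cmd := exec.CommandContext(ctx, name, args...)
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		return err
	}
	if err := cmd.Start(); err != nil {
		return err
	}
	scanner := bufio.NewScanner(stdout)
	for scanner.Scan() {
		// This is where each line would be shown to the user (and/or
		// buffered to hand back to the LLM later) as soon as it arrives.
		fmt.Println(scanner.Text())
	}
	// Wait reaps the process. If ctx was cancelled, the returned error just
	// reflects the kill and can be treated as "user stopped the stream".
	return cmd.Wait()
}

func main() {
	// Example usage; with a plain Background context the stream runs until
	// the child process exits or is killed externally.
	_ = streamCommand(context.Background(), "kubectl", "logs", "-f", "my-pod")
}
```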
That approach makes sense and is essentially how I'd imagined it. I do worry slightly about hijacking CTRL-C for this; I'd actually expect CTRL-C to break out of the REPL entirely. Not religious on the topic, and open to hearing of other examples of similar "stream for a while, then break out" experiences from other utilities. But in general, what you're describing is what I had expected.
Actually, I thought about it, and `less` is a good analogy: when you follow a file with `less +F` (or press F inside `less`), CTRL-C stops the follow and drops you back at the `less` prompt rather than exiting; the `less` man page describes this behavior. That "stream until interrupted, then return to the prompt" feel is what I'd want here.
So, @Vinay-Khanagavi -- yes, that!
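(For reference, one way to get that `less`-style behavior in Go is to scope CTRL-C handling to the streaming call, so the interrupt stops only the stream and the REPL keeps running. This is a hedged sketch reusing the hypothetical `streamCommand` from the earlier sketch, not the actual implementation.)

```go
import (
	"context"
	"os"
	"os/signal"
)

// streamWithInterrupt wraps streamCommand so that CTRL-C (SIGINT) only
// cancels the streaming command; once the stream stops, control returns to
// the surrounding REPL and its prompt instead of the whole process exiting.
func streamWithInterrupt(name string, args ...string) error {
	ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt)
	defer stop() // restore default CTRL-C behavior for the rest of the REPL
	return streamCommand(ctx, name, args...)
}
```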
Thanks for the quick feedback and for sharing the `less` analogy!
Just submitted a PR to address this feature request: this PR adds real-time streaming for commands like `kubectl logs -f`, with graceful handling of interrupts (CTRL-C) to return to the prompt, just as described in this issue. Big thanks to @linde for the super clear explanation and the `less` analogy; it really helped me get the streaming/interrupt behavior right. The details and man page reference made it much easier to implement. Appreciate it!
Excellent. I like the solution, and thanks for the PR @Vinay-Khanagavi; will take a look soon. We may have to audit the kubectl commands to ensure that our logic for detecting blocking commands is robust. For example:
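(One illustration of the ambiguity such an audit has to deal with, offered as my own example rather than necessarily the one meant above: `-f` means `--follow` for `kubectl logs` but `--filename` for `kubectl apply`, so matching on flags alone is not enough. Below is a naive detector sketch in Go with made-up names, not the project's actual logic.)

```go
// isStreamingKubectl is a rough, deliberately incomplete guess at whether a
// kubectl invocation will block by streaming output. args is assumed to
// exclude the leading "kubectl" itself.
func isStreamingKubectl(args []string) bool {
	if len(args) == 0 {
		return false
	}
	sub := args[0]
	for _, a := range args[1:] {
		// "-f" only means --follow for `kubectl logs`; for `kubectl apply`
		// it means --filename and does not block.
		if sub == "logs" && (a == "-f" || a == "--follow") {
			return true
		}
		// Watch-style flags (e.g. `kubectl get -w`) also stream until killed.
		if a == "-w" || a == "--watch" || a == "--watch-only" {
			return true
		}
	}
	// Some subcommands block by nature regardless of flags.
	return sub == "port-forward" || sub == "attach"
}
```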
Just to clarify: I initially opened the PR from my main branch. To keep things organized, I closed that PR and created a new one from a dedicated feature branch for this issue. This should make the history and future changes much cleaner. Thanks for your understanding!
I was in a situation where I had a workload I wanted to watch while it was starting up (i.e., tailing its logs). This workload happened to take a long time to become scheduled/runnable, so I was hoping `kubectl ai` could support a prompt along the lines of the following:

It is worth noting that the following works just as intended:
It prints the logs and even remarks that there is just one line at that point:

When I asked it to tail the logs, it does this, which is remarkable, but it doesn't handle the streaming part:

It just never returns.

I get that handling the streaming is probably a challenge; just filing an FR to capture the useful CUJ (critical user journey).

Here is the config I had for my "workload":