Replies: 3 comments
-
Thanks @epage -- I was writing a summary of what the kernel is looking for, which you may find useful, so I paste it here.

For […], we would essentially need the location of the test (path/file, line and "test number"), plus the content/text of the test. This will be written (by custom steps in the kernel build system) into a generated file, built and eventually run at some point within the kernel. Currently we figure those out (i.e. path etc.) via a hack on top of the […].

We would also like to have the […].

There are also other bits of information that we may want to annotate the tests with and that […].

For […]
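As a rough illustration (the type and field names here are made up for this sketch, not an existing kernel or rustc interface), the per-test record we need would look roughly like:

```rust
// Hypothetical record for one collected test; names are illustrative only,
// not an existing kernel or rustc interface.
struct CollectedTest {
    /// Source file the test came from.
    path: String,
    /// Line where the test starts.
    line: u32,
    /// Index of the test within that file (the "test number").
    number: u32,
    /// The content/text of the test, to be written into a generated file,
    /// built, and eventually run within the kernel.
    body: String,
}
```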
-
I think we want to eventually expose extra KUnit features to Rust kernel tests (such as expectations, skipping of tests, mocking, and so on). However, it remains to be seen what that will look like and how what Rust provides fits with it. That said, "static" annotations like […]
-
To be clear, we currently support this within the kernel via the […]
-
I had a discussion with Miguel Ojeda (@ojeda) today about the Linux kernel's needs for testing.
Historically, Linux's testing strategy has been decentralized.
Overall, our focus in this post will be on kunit. This is a Linux kernel framework that manages running all tests, whether C or Rust, effectively taking the place of `cargo test`.

The first area we focused on was doc tests. Today they have a hack where they run rustdoc (using an unstable CLI, see rust-lang/rust#102981) with a custom runner that just dumps all of the tests to a file. They then merge all of these files and register them with kunit.
See also https://lore.kernel.org/rust-for-linux/[email protected]/
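As a rough sketch of the "dump instead of run" idea (the file names and the way rustdoc hands the test over are assumptions for illustration, not the kernel's actual scripts or rustdoc flags):

```rust
// Sketch of a stand-in test builder: instead of compiling and running a doc
// test, it appends the test's source and origin to a file for later merging
// and registration with kunit. Argument handling and file names are assumed
// for illustration.
use std::env;
use std::fs::{self, OpenOptions};
use std::io::Write;

fn main() -> std::io::Result<()> {
    // Assume the generated doc test source path is passed as the last argument.
    let test_file = env::args()
        .last()
        .expect("expected a path to the doc test source");
    let body = fs::read_to_string(&test_file)?;

    // Append the test, tagged with where it came from, to a per-crate dump.
    let mut out = OpenOptions::new()
        .create(true)
        .append(true)
        .open("doctests.dump")?;
    writeln!(out, "// from {test_file}")?;
    writeln!(out, "{body}")?;
    Ok(())
}
```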
The main issues are […]
I think this aligns with my interest in making all test runners appear the same and in speeding up doc tests. I'll likely want the same metadata as Miguel does, dumped all at once for me to do with as I desire (and report back as desired).
The second area was kunit's special requirements on tests.
- All tests must report failures back up through a context pointer. The first way this hits is with `assert_eq`. They have a `kunit_tests` suite (`mod`) macro that imports their custom `assert_eq`. The second is accessing the pointer. Currently, they hack that in behind the scenes. One thought I had for this is that if an `assert` reported errors back up through `Result`, then the macro-generated caller could take that and register it with the context pointer (see the sketch after this list).
- They need to register tests. The `kunit_tests` suite (`mod`) macro enumerates all tests with a `#[test]` macro and rewrites them. Distributed slice work could be a big help (#40).
- They need test markers (fast, slow, stress). This could just be handled with a custom test macro. Unsure if reusing our work on test harnesses would help or not.
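Here is a minimal sketch of that `Result`-reporting idea; `KunitContext` and `report_failure` are made-up stand-ins for illustration, not the kernel's real KUnit bindings or what the `kunit_tests` macro actually generates:

```rust
// Sketch: a test reports failures up through `Result`, and the macro-generated
// caller owns the context pointer and registers any failure with it.
// `KunitContext` and `report_failure` are illustrative stand-ins.

struct KunitContext(*mut core::ffi::c_void);

impl KunitContext {
    fn report_failure(&self, msg: &str) {
        // In the kernel this would call into KUnit through the context
        // pointer; here we just print for illustration.
        eprintln!("kunit failure: {msg}");
        let _ = self.0;
    }
}

// A test as the user would write it: failures bubble up as `Err` instead of
// panicking.
fn my_test() -> Result<(), String> {
    let x = 2 + 2;
    if x != 4 {
        return Err(format!("assertion failed: {x} != 4"));
    }
    Ok(())
}

// What a `kunit_tests`-style macro could generate: a caller that takes the
// `Err` and registers it with the context pointer.
fn my_test_caller(ctx: &KunitContext) {
    if let Err(msg) = my_test() {
        ctx.report_failure(&msg);
    }
}

fn main() {
    let ctx = KunitContext(core::ptr::null_mut());
    my_test_caller(&ctx);
}
```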
I asked about runtime detection support (like hardware) and ignoring of tests and there wasn't much interest, at least at this time.
We also got sidetracked talking about cargo. It sounds like they have needs similar to application developers for something around cargo. When they did try to use cargo, `cfg` […]. They are building `std` manually, and if `build-std` became the only way to build it, then that could push them to cargo again.