The fuzzer sometimes builds regexes that take a while to execute, such as
`\B{10000}`. They fit within the default size limit, but their search
times aren't great. Since that isn't a bug, decrease the size limit a bit
to try to prevent timeouts.
We might consider trying to optimize cases like `\B{10000}`. A naive
optimization would be to remove any redundant conditional epsilon
transitions within a single epsilon closure, but that can be tricky to
do a priori. The case of `\B{100000}` is probably easy to detect, but
such redundancies can be arbitrarily complex.
Another way to attack this would be to modify, say, the PikeVM to compute
whether a conditional epsilon transition should be followed only once per
haystack position. Right now, I believe it re-computes that check even
when it doesn't have to.
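The per-position idea above can be sketched in plain Rust. This is not the PikeVM's actual code; it uses a simplified ASCII word-boundary test and a hypothetical `AssertionCache` type to show how memoizing the assertion makes an epsilon closure with many `\B` transitions evaluate it once per position instead of once per transition:

```rust
use std::collections::HashMap;

// Simplified ASCII notion of a "word" byte.
fn is_word_byte(b: u8) -> bool {
    b.is_ascii_alphanumeric() || b == b'_'
}

// True when `pos` sits on a word boundary: exactly one side is a word byte.
fn word_boundary_at(haystack: &[u8], pos: usize) -> bool {
    let before = pos.checked_sub(1).map_or(false, |i| is_word_byte(haystack[i]));
    let after = haystack.get(pos).map_or(false, |&b| is_word_byte(b));
    before != after
}

// Hypothetical cache: memoizes the `\B` check per haystack position, so
// following 10,000 `\B` transitions in one closure costs one evaluation.
struct AssertionCache {
    memo: HashMap<usize, bool>,
    evaluations: usize,
}

impl AssertionCache {
    fn new() -> Self {
        AssertionCache { memo: HashMap::new(), evaluations: 0 }
    }

    // `\B` succeeds where there is *not* a word boundary.
    fn not_word_boundary(&mut self, haystack: &[u8], pos: usize) -> bool {
        if let Some(&cached) = self.memo.get(&pos) {
            return cached;
        }
        self.evaluations += 1;
        let result = !word_boundary_at(haystack, pos);
        self.memo.insert(pos, result);
        result
    }
}

fn main() {
    let haystack = b"abc";
    let mut cache = AssertionCache::new();
    // Simulate an epsilon closure that crosses 10,000 `\B` transitions
    // at the same haystack position.
    let mut followed = true;
    for _ in 0..10_000 {
        followed = followed && cache.not_word_boundary(haystack, 1);
    }
    // Position 1 is inside the word "abc", so `\B` holds there, and the
    // underlying check ran only once despite 10,000 lookups.
    assert!(followed);
    assert_eq!(cache.evaluations, 1);
}
```

A real implementation would likely reset the cache (or just a single cached flag) at each haystack position rather than keep a growing map, but the effect is the same: the assertion's cost becomes per-position, not per-transition.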