Description
One of the major motivations for lexical-core's recent developments has been support for parsing floats of different formats, most notably JSON numbers.
A few notable differences exist in comparison to Rust floats, or those in other languages: for example, Rust literals and Rust strings already follow different rules. Providing a function with the signature `fn parse_tokenized(integral: &[u8], fractional: &[u8], exponent: i32, negative: bool);` would therefore allow users to validate their own float formats, while letting fast-float-rust do the majority of the heavy lifting. It would also not accept special floats, leaving those to the caller.
This should require minimal changes in the parsing implementation, while making the library much more suitable for general-purpose applications.
For example, parsing Rust strings via `str::parse::<f64>()` gives the following:
"NaN" // valid
"nan" // invalid
"1.23" // valid
"1.23e" // invalid
"1." // valid
".1" // valid
"1.23e5" // valid
"+1.23e5" // valid
"-1.23e5" // valid
Meanwhile, in JSON, we get the following:
"NaN" // invalid
"nan" // invalid
"1.23" // valid
"1.23e" // invalid
"1." // invalid
".1" // invalid
"1.23e5" // valid
"+1.23e5" // invalid
"-1.23e5" // valid
This can extend to various markup languages, like TOML, YAML (which has the same rules as JSON), XML, and others.
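To make the proposal concrete, here is a hedged sketch of how a caller might pair its own JSON number validation with the proposed entry point. `parse_tokenized` does not exist yet; the stub below simply reassembles the tokens and defers to the standard parser, standing in for fast-float-rust's real implementation (the JSON leading-zero rule is omitted for brevity):

```rust
// Hypothetical stub for the proposed entry point. The real version would
// live in fast-float-rust; this one just reassembles the tokens and defers
// to the standard parser. (Appending a trailing '0' to the fraction keeps
// the format valid when `fractional` is empty, without changing the value.)
fn parse_tokenized(integral: &[u8], fractional: &[u8], exponent: i32, negative: bool) -> f64 {
    let sign = if negative { "-" } else { "" };
    let int_s = std::str::from_utf8(integral).unwrap();
    let frac_s = std::str::from_utf8(fractional).unwrap();
    format!("{}{}.{}0e{}", sign, int_s, frac_s, exponent).parse().unwrap()
}

/// Validate a JSON number and hand the digit slices to `parse_tokenized`.
fn parse_json_number(input: &str) -> Option<f64> {
    let b = input.as_bytes();
    let mut i = 0;
    let negative = b.first() == Some(&b'-');
    if negative {
        i += 1;
    }

    // Integer part: at least one digit; JSON forbids a leading '+' or a bare '.'.
    let start = i;
    while i < b.len() && b[i].is_ascii_digit() {
        i += 1;
    }
    if i == start {
        return None;
    }
    let integral = &b[start..i];

    // Optional fraction: '.' must be followed by at least one digit ("1." is invalid).
    let mut fractional: &[u8] = &[];
    if i < b.len() && b[i] == b'.' {
        i += 1;
        let start = i;
        while i < b.len() && b[i].is_ascii_digit() {
            i += 1;
        }
        if i == start {
            return None;
        }
        fractional = &b[start..i];
    }

    // Optional exponent: 'e'/'E', optional sign, at least one digit ("1.23e" is invalid).
    let mut exponent = 0i32;
    if i < b.len() && (b[i] == b'e' || b[i] == b'E') {
        i += 1;
        let exp_negative = i < b.len() && b[i] == b'-';
        if i < b.len() && (b[i] == b'+' || b[i] == b'-') {
            i += 1;
        }
        let start = i;
        while i < b.len() && b[i].is_ascii_digit() {
            i += 1;
        }
        if i == start {
            return None;
        }
        let mag: i32 = input[start..i].parse().ok()?;
        exponent = if exp_negative { -mag } else { mag };
    }

    if i != b.len() {
        return None; // trailing bytes
    }
    Some(parse_tokenized(integral, fractional, exponent, negative))
}
```

With this split, `parse_json_number("1.23e5")` returns `Some(123000.0)`, while `parse_json_number("1.")`, `parse_json_number(".1")`, and `parse_json_number("+1.23e5")` all return `None`; supporting another format only means writing a different tokenizer in front of the same core.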