statistics module: NaN handling in median, median_high and median_low #77265
When a list or DataFrame series contains NaN(s), median, median_low and median_high in the Python 3.6.4 statistics library still compute, but the results are wrong:

data = [75, 90, 85, 92, 95, 80, np.nan]
Median = stats.median(data)
Median_low = stats.median_low(data)
Median_high = stats.median_high(data)

The results above ALL return 90, which is incorrect. The correct answers (after excluding the NaN) should be: median 87.5, median_low 85, median_high 90.
Will just removing all np.nan values do the job? By the way, the remaining values would then be [75, 90, 85, 92, 95, 80].
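To make the question concrete, here is a sketch (not from the original thread) of what dropping the NaN would yield, using `math.isnan` to filter:

```python
import math
import statistics as stats

data = [75, 90, 85, 92, 95, 80, float("nan")]

# Drop the NaN before computing; the remaining six values sort to
# [75, 80, 85, 90, 92, 95], so the two middle values are 85 and 90.
cleaned = [x for x in data if not math.isnan(x)]

print(stats.median(cleaned))       # 87.5
print(stats.median_low(cleaned))   # 85
print(stats.median_high(cleaned))  # 90
```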
Unfortunately, I don't think it's that simple. You want consistency across the various library calls, so if the various functions handle NaNs at all, they should all handle them the same way.
Well, if I don't consider np.nan as missing data and instead consider all the other values, then the answer being 90 is correct, right?
How do you deduce that? Why 90 rather than 85 (or 87.5, or some other value)? For what it's worth, NumPy gives a result of NaN for the median of an array that contains NaNs:

>>> np.median([1.0, 2.0, 3.0, np.nan])
/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/lib/function_base.py:4033: RuntimeWarning: Invalid value encountered in median
  r = func(a, **kwargs)
nan
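For completeness, NumPy also provides `numpy.nanmedian`, which ignores NaNs instead of propagating them; a minimal comparison:

```python
import math
import numpy as np

a = np.array([1.0, 2.0, 3.0, np.nan])

# np.median propagates the NaN (and emits a RuntimeWarning);
# np.nanmedian drops it and computes the median of the rest.
print(np.median(a))     # nan
print(np.nanmedian(a))  # 2.0
```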
On Fri, Mar 16, 2018 at 02:32:36PM +0000, Mark Dickinson wrote:

By default, R gives NA for the median of a vector containing either NaN or NA, but you can ask it to ignore them too (with na.rm=TRUE):
Just to make sure we are focused on the issue: the reported bug is in the statistics library (not in NumPy). It happens when there is at least one missing value in the data, and it involves computing the median, median_low and median_high with the statistics library. When there are no missing values (NaNs) in the data, median, median_high and median_low from the statistics library work fine.

import numpy as np
import statistics as stats

data = [75, 90, 85, 92, 95, 80, np.nan]

Median = stats.median(data)
Median_high = stats.median_high(data)
Median_low = stats.median_low(data)

print("The incorrect median is", Median)
The incorrect median is 90
print("The incorrect median high is", Median_high)
The incorrect median high is 90
print("The incorrect median low is", Median_low)
The incorrect median low is 90

## Mean returns nan
Mean = stats.mean(data)
print("The mean is", Mean)
The mean is nan

Now, when we drop the missing values, we have: median 87.5, median_high 90, median_low 85.
So from the above, am I to conclude that removing np.nan is the best path to take? Also, should that step be included in median_grouped as well?
If we are trying to fix this, the behavior should match computing the mean or harmonic mean with the statistics library when there are missing values in the data. At least that way it is consistent with how the statistics library already behaves with NaNs in the data. Then again, it should be mentioned somewhere in the docs.

import statistics as stats
import numpy as np
import pandas as pd

data = [75, 90, 85, 92, 95, 80, np.nan]

stats.mean(data)
nan
stats.harmonic_mean(data)
nan
stats.stdev(data)
nan

As you can see, when there is a missing value, computing the mean, harmonic mean and sample standard deviation with the statistics library returns nan. However, median, median_high and median_low compute their statistics incorrectly when missing values are present in the data. It is better to return a nan, and then let the user drop (or resolve) any missing values before computing.
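A minimal sketch of that "return nan" behavior, as a hypothetical wrapper (the function name is made up for this sketch; it is not part of the statistics module):

```python
import math
import statistics as stats

def median_or_nan(data):
    """Hypothetical wrapper: propagate NaN instead of returning an
    order-dependent result. Not a stdlib function."""
    data = list(data)
    # If any float NaN is present, return NaN like mean() effectively does.
    if any(isinstance(x, float) and math.isnan(x) for x in data):
        return math.nan
    return stats.median(data)

print(median_or_nan([75, 90, 85, 92, 95, 80, math.nan]))  # nan
print(median_or_nan([75, 90, 85, 92, 95, 80]))            # 87.5
```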
## Another example using a pandas Series

df = pd.DataFrame(data, columns=['data'])
df.head()
   data
0  75.0
1  90.0
2  85.0
3  92.0
4  95.0
5  80.0
6   NaN

### Use the statistics library to compute the median of the Series
stats.median(df['data'])
90

## Now use pandas to compute the median of the Series with a missing value;
## pandas returns the correct median by dropping the missing values
df['data'].median()
87.5

I did not test median_grouped in the statistics library, but I will let you know afterwards if it's affected as well.
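pandas makes this skipping behavior explicit through the `skipna` parameter of `Series.median` (True by default); a small sketch of both modes:

```python
import math
import numpy as np
import pandas as pd

s = pd.Series([75, 90, 85, 92, 95, 80, np.nan])

# skipna=True (the default) drops the NaN before computing.
print(s.median())              # 87.5

# skipna=False propagates the NaN instead.
print(s.median(skipna=False))  # nan
```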
I want to revisit this for 3.8. I agree that the current implementation-dependent behaviour when there are NANs in the data is troublesome. But I don't think that there is a single right answer. I also agree with Mark that if we change median, we ought to change the other functions so that people can get consistent behaviour. It wouldn't be good for median to ignore NANs while mean processes them. I'm inclined to add a parameter to the statistics functions to deal with NANs, allowing the caller to select from:

* do nothing, and simply assume there are no NANs in the data (the fast default);
* raise an exception;
* return a NAN;
* ignore them (strip NANs out before computing).

I think that raise/return/ignore will cover most use-cases for NANs, and the default will be suitable for the "easy cases" where there are no NANs, without paying any performance penalty if you already know your data has no NANs. Thoughts? I'm especially looking for ideas on what to call the first option.
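A rough sketch of what such a parameter could look like (the names `nan_policy`, `"assume_clean"`, `"return_nan"` and the wrapper itself are illustrative only; this is not an API that was adopted):

```python
import math
import statistics as stats

def _is_nan(x):
    return isinstance(x, float) and math.isnan(x)

def median_with_policy(data, nan_policy="assume_clean"):
    """Illustrative only: one way the raise/return/ignore options
    could be spelled. All names here are made up for this sketch."""
    data = list(data)
    if nan_policy != "assume_clean" and any(map(_is_nan, data)):
        if nan_policy == "raise":
            raise ValueError("NaN in data")
        if nan_policy == "return_nan":
            return math.nan
        if nan_policy == "ignore":
            data = [x for x in data if not _is_nan(x)]
    # "assume_clean" skips the check entirely, paying no cost.
    return stats.median(data)

nan = float("nan")
print(median_with_policy([1, 2, nan, 3], "ignore"))      # 2
print(median_with_policy([1, 2, nan, 3], "return_nan"))  # nan
```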
I believe that the current behavior of median with NaNs in the data is a bug. I can think of absolutely no way to characterize these as reasonable results:

Python 3.7.1 | packaged by conda-forge | (default, Nov 13 2018, 09:50:42)
>>> statistics.median([9, 9, 9, nan, 1, 2, 3, 4, 5])
1
>>> statistics.median([9, 9, 9, nan, 1, 2, 3, 4])
nan
Based on a quick review of the Python docs, the bug report, and PEP 450, this implies dividing this issue into two parts: legacy and future. For more information, see:
Reproduced in 3.11:

>>> import numpy as np
>>> import statistics as stats
>>> data = [75, 90, 85, 92, 95, 80, np.nan]
>>> stats.median(data)
90
>>> stats.median_low(data)
90
>>> stats.median_high(data)
90
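The underlying cause is that every ordered comparison involving a NaN is False, so the sort that `median` relies on cannot place the NaN meaningfully and can silently leave the surrounding elements out of order:

```python
nan = float("nan")

# All ordered comparisons with NaN are False, and NaN != NaN:
print(nan < 1, nan > 1, nan == nan)  # False False False

# Consequently sorted() gives no guarantee about where the NaN lands,
# and the elements around it may end up mis-ordered (result varies).
print(sorted([75, 90, 85, 92, 95, 80, nan]))
```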
[Steven]

def remove_nans(iterable):
    "Remove float('NaN') and other objects not equal to themselves"
    return [x for x in iterable if x == x]
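A quick usage sketch of that helper (the `x == x` test is False only for NaN-like objects, so they are the ones filtered out; the sample data below is illustrative):

```python
def remove_nans(iterable):
    "Remove float('NaN') and other objects not equal to themselves"
    return [x for x in iterable if x == x]

# float('nan') != float('nan'), so the NaN fails the x == x test.
data = [75, 90, 85, float("nan"), 92]
print(remove_nans(data))  # [75, 90, 85, 92]
```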
See thread on Python-Ideas: https://mail.python.org/archives/list/[email protected]/thread/EDRF2NR4UOYMSKE64KDI2SWUMKPAJ3YM/
…count (#94676)
* Document NaN handling in functions that sort or count
* Update Doc/library/statistics.rst
* Fix trailing whitespace and rewrap text
Co-authored-by: Erlend Egeberg Aasland <[email protected]>

…rt or count (pythonGH-94676)
* Document NaN handling in functions that sort or count
* Update Doc/library/statistics.rst
* Fix trailing whitespace and rewrap text
Co-authored-by: Erlend Egeberg Aasland <[email protected]>
(cherry picked from commit ef61b25)
Co-authored-by: Raymond Hettinger <[email protected]>
Closing, since it looks like the documentation change has been made. Raymond additionally suggested adding a function to remove NaNs from a list; judging by the fact that the documentation change uses filterfalse + isnan, it appears we may have elected not to make this addition (which would probably deserve its own issue anyway).