MRG: Remove double-backticks #5120

Merged: 3 commits, Apr 12, 2018

8 changes: 4 additions & 4 deletions mne/beamformer/_lcmv.py
@@ -25,7 +25,7 @@
from ..channels.channels import _contains_ch_type

depr_message = ("This function is deprecated and will be removed in 0.17, "
"please use `make_lcmv` and `%s` instead.")
"please use :func:`make_lcmv` and :func:`%s` instead.")


def _reg_pinv(x, reg):
@@ -520,7 +520,7 @@ def apply_lcmv(evoked, filters, max_ori_out='signed', verbose=None):

See Also
--------
apply_lcmv_raw, apply_lcmv_epochs
make_lcmv, apply_lcmv_raw, apply_lcmv_epochs
"""
_check_reference(evoked)

@@ -569,7 +569,7 @@ def apply_lcmv_epochs(epochs, filters, max_ori_out='signed',

See Also
--------
apply_lcmv_raw, apply_lcmv
make_lcmv, apply_lcmv_raw, apply_lcmv
"""
_check_reference(epochs)

@@ -622,7 +622,7 @@ def apply_lcmv_raw(raw, filters, start=None, stop=None, max_ori_out='signed',

See Also
--------
apply_lcmv_epochs, apply_lcmv
make_lcmv, apply_lcmv_epochs, apply_lcmv
"""
_check_reference(raw)

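The hunks above swap plain backticks for the Sphinx :func: role so the rendered deprecation message links to :func:`make_lcmv`. As a hedged sketch (hypothetical wrapper, not the actual MNE deprecation machinery), a module-level template like ``depr_message`` is typically filled in and emitted like this:

```python
# Hedged sketch, not the actual MNE wrapper: how a module-level template
# like depr_message is typically filled in and raised at call time.
import warnings

depr_message = ("This function is deprecated and will be removed in 0.17, "
                "please use :func:`make_lcmv` and :func:`%s` instead.")


def lcmv(*args, **kwargs):
    """Deprecated LCMV entry point (illustrative only)."""
    warnings.warn(depr_message % 'apply_lcmv', DeprecationWarning)
    # A real implementation would delegate to make_lcmv()/apply_lcmv() here.
```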
4 changes: 2 additions & 2 deletions mne/decoding/base.py
@@ -31,9 +31,9 @@ class LinearModel(BaseEstimator):

Attributes
----------
``filters_`` : ndarray, shape ([n_targets], n_features)
filters_ : ndarray, shape ([n_targets], n_features)
If fit, the filters used to decompose the data.
``patterns_`` : ndarray, shape ([n_targets], n_features)
patterns_ : ndarray, shape ([n_targets], n_features)
If fit, the patterns used to restore M/EEG signals.

Notes
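Dropping the double-backticks is not just cosmetic: numpydoc renders ``filters_`` as inline literal text, while a bare name in an Attributes section becomes a regular attribute entry. A toy illustration of the convention this PR moves to (hypothetical class, not MNE code):

```python
class ToyModel(object):
    """Toy estimator showing the numpydoc attribute style adopted here.

    Attributes
    ----------
    filters_ : ndarray, shape (n_targets, n_features)
        Written without surrounding double-backticks so numpydoc/Sphinx
        render it as a proper attribute entry rather than literal code.
    """

    def fit(self, X, y):
        self.filters_ = None  # placeholder; a real fit would compute this
        return self
```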
16 changes: 8 additions & 8 deletions mne/decoding/csp.py
@@ -63,13 +63,13 @@ class CSP(TransformerMixin, BaseEstimator):

Attributes
----------
``filters_`` : ndarray, shape (n_components, n_channels)
filters_ : ndarray, shape (n_components, n_channels)
If fit, the CSP components used to decompose the data, else None.
``patterns_`` : ndarray, shape (n_components, n_channels)
patterns_ : ndarray, shape (n_components, n_channels)
If fit, the CSP patterns used to restore M/EEG signals, else None.
``mean_`` : ndarray, shape (n_components,)
mean_ : ndarray, shape (n_components,)
If fit, the mean squared power for each component.
``std_`` : ndarray, shape (n_components,)
std_ : ndarray, shape (n_components,)
If fit, the std squared power for each component.

See Also
@@ -702,13 +702,13 @@ class SPoC(CSP):

Attributes
----------
``filters_`` : ndarray, shape (n_components, n_channels)
filters_ : ndarray, shape (n_components, n_channels)
If fit, the SPoC spatial filters, else None.
``patterns_`` : ndarray, shape (n_components, n_channels)
patterns_ : ndarray, shape (n_components, n_channels)
If fit, the SPoC spatial patterns, else None.
``mean_`` : ndarray, shape (n_components,)
mean_ : ndarray, shape (n_components,)
If fit, the mean squared power for each component.
``std_`` : ndarray, shape (n_components,)
std_ : ndarray, shape (n_components,)
If fit, the std squared power for each component.

See Also
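For context, the CSP/SPoC attributes renamed above only exist once ``fit`` has run; a short usage sketch on random data (all shapes and values are arbitrary):

```python
# Illustrative only: random data, arbitrary shapes; the attributes below
# are populated only after fit() has been called.
import numpy as np
from mne.decoding import CSP

rng = np.random.RandomState(42)
X = rng.randn(30, 8, 200)              # n_epochs x n_channels x n_times
y = rng.randint(0, 2, 30)              # two classes
csp = CSP(n_components=4).fit(X, y)
print(csp.filters_.shape, csp.patterns_.shape)
print(csp.mean_.shape, csp.std_.shape)
```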
4 changes: 2 additions & 2 deletions mne/decoding/ems.py
@@ -28,9 +28,9 @@ class EMS(TransformerMixin, EstimatorMixin):

Attributes
----------
``filters_`` : ndarray, shape (n_channels, n_times)
filters_ : ndarray, shape (n_channels, n_times)
The set of spatial filters.
``classes_`` : ndarray, shape (n_classes,)
classes_ : ndarray, shape (n_classes,)
The target classes.

References
14 changes: 7 additions & 7 deletions mne/decoding/receptive_field.py
@@ -55,16 +55,16 @@ class ReceptiveField(BaseEstimator):

Attributes
----------
``coef_`` : array, shape ([n_outputs, ]n_features, n_delays)
coef_ : array, shape ([n_outputs, ]n_features, n_delays)
The coefficients from the model fit, reshaped for easy visualization.
During :meth:`mne.decoding.ReceptiveField.fit`, if ``y`` has one
dimension (time), the ``n_outputs`` dimension here is omitted.
``patterns_`` : array, shape ([n_outputs, ]n_features, n_delays)
patterns_ : array, shape ([n_outputs, ]n_features, n_delays)
If fit, the inverted coefficients from the model.
``delays_``: array, shape (n_delays,), dtype int
delays_ : array, shape (n_delays,), dtype int
The delays used to fit the model, in indices. To return the delays
in seconds, use ``self.delays_ / self.sfreq``
``valid_samples_`` : slice
valid_samples_ : slice
The rows to keep during model fitting after removing rows with
missing values due to time delaying. This can be used to get an
output equivalent to using :func:`numpy.convolve` or
@@ -292,9 +292,9 @@ def predict(self, X):
def score(self, X, y):
"""Score predictions generated with a receptive field.

This calls `self.predict`, then masks the output of this
and `y` with `self.mask_prediction_`. Finally, it passes
this to a `sklearn` scorer.
This calls ``self.predict``, then masks the output of this
and ``y`` with ``self.mask_prediction_``. Finally, it passes
this to a :mod:`sklearn.metrics` scorer.

Parameters
----------
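The reworded ``score`` docstring ultimately hands the masked prediction and target to a scikit-learn-based scorer; a hedged end-to-end sketch on synthetic data (parameter values are arbitrary):

```python
# Synthetic example; values are arbitrary and for illustration only.
import numpy as np
from mne.decoding import ReceptiveField

rng = np.random.RandomState(0)
X = rng.randn(1000, 3)                           # n_times x n_features
y = X.dot(rng.randn(3)) + 0.1 * rng.randn(1000)
rf = ReceptiveField(tmin=-0.1, tmax=0.2, sfreq=100.,
                    estimator=1., scoring='r2')  # ridge-style fit, 'r2' scorer
rf.fit(X, y)
print(rf.score(X, y))                            # delegates to the chosen scorer
```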
2 changes: 1 addition & 1 deletion mne/decoding/search_light.py
@@ -33,7 +33,7 @@ class SlidingEstimator(BaseEstimator, TransformerMixin):

Attributes
----------
``estimators_`` : array-like, shape (n_tasks,)
estimators_ : array-like, shape (n_tasks,)
List of fitted scikit-learn estimators (one per task).
"""

2 changes: 1 addition & 1 deletion mne/decoding/transformer.py
@@ -233,7 +233,7 @@ class Vectorizer(TransformerMixin):

Attributes
----------
``features_shape_`` : tuple
features_shape_ : tuple
Stores the original shape of data.
"""

2 changes: 1 addition & 1 deletion mne/io/base.py
@@ -1557,7 +1557,7 @@ def crop(self, tmin=0.0, tmax=None):
last_samp are set accordingly.

This function operates in-place on the instance.
Use :meth:`mne.Raw.copy` if operation on a copy is desired.
Use :meth:`mne.io.Raw.copy` if operation on a copy is desired.

Parameters
----------
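The corrected cross-reference points at :meth:`mne.io.Raw.copy`; a small sketch of the copy-then-crop pattern the docstring recommends (toy RawArray, arbitrary numbers):

```python
# Toy data; crop() modifies the instance unless you work on a copy.
import numpy as np
import mne

info = mne.create_info(ch_names=['ch1', 'ch2'], sfreq=100., ch_types='eeg')
raw = mne.io.RawArray(np.random.randn(2, 2000), info)
raw_short = raw.copy().crop(tmin=0., tmax=5.)   # only the copy is cropped
print(raw.times[-1], raw_short.times[-1])       # original keeps its full span
```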
16 changes: 8 additions & 8 deletions mne/preprocessing/ica.py
@@ -206,23 +206,23 @@ class ICA(ContainsMixin):
ch_names : list-like
Channel names resulting from initial picking.
The number of components used for ICA decomposition.
``n_components_`` : int
n_components_ : int
If fit, the actual number of components used for ICA decomposition.
n_pca_components : int
See above.
max_pca_components : int
The number of components used for PCA dimensionality reduction.
verbose : bool, str, int, or None
See above.
``pca_components_`` : ndarray
pca_components_ : ndarray
If fit, the PCA components
``pca_mean_`` : ndarray
pca_mean_ : ndarray
If fit, the mean vector used to center the data before doing the PCA.
``pca_explained_variance_`` : ndarray
pca_explained_variance_ : ndarray
If fit, the variance explained by each PCA component
``mixing_matrix_`` : ndarray
mixing_matrix_ : ndarray
If fit, the mixing matrix to restore observed data, else None.
``unmixing_matrix_`` : ndarray
unmixing_matrix_ : ndarray
If fit, the matrix to unmix observed data, else None.
exclude : list
List of sources indices to exclude, i.e. artifact components identified
@@ -234,9 +234,9 @@ class ICA(ContainsMixin):
again. To dump this 'artifact memory' say: ica.exclude = []
info : None | instance of Info
The measurement info copied from the object fitted.
``n_samples_`` : int
n_samples_ : int
the number of samples used on fit.
``labels_`` : dict
labels_ : dict
A dictionary of independent component indices, grouped by types of
independent components. This attribute is set by some of the artifact
detection functions.
10 changes: 5 additions & 5 deletions mne/preprocessing/xdawn.py
@@ -364,17 +364,17 @@ class Xdawn(_XdawnTransformer):

Attributes
----------
``filters_`` : dict of ndarray
filters_ : dict of ndarray
If fit, the Xdawn components used to decompose the data for each event
type, else empty.
``patterns_`` : dict of ndarray
patterns_ : dict of ndarray
If fit, the Xdawn patterns used to restore the signals for each event
type, else empty.
``evokeds_`` : dict of evoked instance
evokeds_ : dict of evoked instance
If fit, the evoked response for each event type.
``event_id_`` : dict of event id
event_id_ : dict of event id
The event id.
``correct_overlap_``: bool
correct_overlap_ : bool
Whether overlap correction was applied.

Notes
2 changes: 1 addition & 1 deletion mne/utils.py
@@ -671,7 +671,7 @@ def _update_doc(self, olddoc):
if self.extra:
newdoc = "%s: %s" % (newdoc, self.extra)
if olddoc:
newdoc = "%s\n\n%s" % (newdoc, olddoc)
newdoc = "%s\n\n %s" % (newdoc, olddoc)
return newdoc


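The extra four spaces in the format string are what keep an appended docstring aligned: the first line of the original docstring is flush left, while its remaining lines carry the usual 4-space indent. A simplified, standalone sketch of the effect (not the full MNE helper):

```python
# Simplified illustration of the "%s\n\n    %s" join used in _update_doc.
note = "DEPRECATED: will be removed in 0.17."
olddoc = ("Do something useful.\n\n"
          "    Parameters\n"
          "    ----------\n"
          "    x : int\n"
          "        Input value.")
print("%s\n\n    %s" % (note, olddoc))  # first line of olddoc now matches
                                        # the indentation of its own body
```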