STY: PEP8/PyFlakes #337

Merged 3 commits on Aug 11, 2015
4 changes: 3 additions & 1 deletion nibabel/__init__.py
@@ -73,9 +73,11 @@
     bench = Tester().bench
     del Tester
 except ImportError:
-    def test(*args, **kwargs): raise RuntimeError('Need numpy >= 1.2 for tests')
+    def test(*args, **kwargs):
+        raise RuntimeError('Need numpy >= 1.2 for tests')

 from .pkg_info import get_pkg_info as _get_pkg_info


 def get_info():
     return _get_pkg_info(os.path.dirname(__file__))
26 changes: 13 additions & 13 deletions nibabel/affines.py
@@ -72,9 +72,9 @@ def apply_affine(aff, pts):
     shape = pts.shape
     pts = pts.reshape((-1, shape[-1]))
     # rzs == rotations, zooms, shears
-    rzs = aff[:-1,:-1]
-    trans = aff[:-1,-1]
-    res = np.dot(pts, rzs.T) + trans[None,:]
+    rzs = aff[:-1, :-1]
+    trans = aff[:-1, -1]
+    res = np.dot(pts, rzs.T) + trans[None, :]
     return res.reshape(shape)
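As a sanity check on the logic in this hunk, here is a minimal self-contained sketch of what `apply_affine` computes (the same slicing and dot product as above, with made-up example numbers):

```python
import numpy as np

def apply_affine(aff, pts):
    # Same logic as the diffed routine: split the affine into the
    # rotations/zooms/shears block and the translation column, then
    # apply both to an array of points.
    pts = np.asarray(pts)
    shape = pts.shape
    pts = pts.reshape((-1, shape[-1]))
    rzs = aff[:-1, :-1]
    trans = aff[:-1, -1]
    res = np.dot(pts, rzs.T) + trans[None, :]
    return res.reshape(shape)

# Scale by 2 and translate by (10, 20, 30).
aff = np.diag([2., 2., 2., 1.])
aff[:3, 3] = [10., 20., 30.]
pts = np.array([[0., 0., 0.], [1., 1., 1.]])
out = apply_affine(aff, pts)
# [[10. 20. 30.]
#  [12. 22. 32.]]
```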


@@ -90,8 +90,8 @@ def to_matvec(transform):
         NxM transform matrix in homogeneous coordinates representing an affine
         transformation from an (N-1)-dimensional space to an (M-1)-dimensional
         space. An example is a 4x4 transform representing rotations and
-        translations in 3 dimensions. A 4x3 matrix can represent a 2-dimensional
-        plane embedded in 3 dimensional space.
+        translations in 3 dimensions. A 4x3 matrix can represent a
+        2-dimensional plane embedded in 3 dimensional space.

     Returns
     -------
@@ -124,8 +124,8 @@ def to_matvec(transform):
 def from_matvec(matrix, vector=None):
     """ Combine a matrix and vector into an homogeneous affine

-    Combine a rotation / scaling / shearing matrix and translation vector into a
-    transform in homogeneous coordinates.
+    Combine a rotation / scaling / shearing matrix and translation vector into
+    a transform in homogeneous coordinates.

     Parameters
     ----------
@@ -163,7 +163,7 @@ def from_matvec(matrix, vector=None):
     """
     matrix = np.asarray(matrix)
     nin, nout = matrix.shape
-    t = np.zeros((nin+1,nout+1), matrix.dtype)
+    t = np.zeros((nin + 1, nout + 1), matrix.dtype)
     t[0:nin, 0:nout] = matrix
     t[nin, nout] = 1.
     if not vector is None:
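The two routines touched here are inverses of each other. A self-contained sketch mirroring the code shown in the diff (example matrix and vector are made up), demonstrating the round trip:

```python
import numpy as np

def from_matvec(matrix, vector=None):
    # Mirrors the diffed routine: embed an NxM matrix and an N-vector
    # into an (N+1)x(M+1) homogeneous affine.
    matrix = np.asarray(matrix)
    nin, nout = matrix.shape
    t = np.zeros((nin + 1, nout + 1), matrix.dtype)
    t[0:nin, 0:nout] = matrix
    t[nin, nout] = 1.
    if vector is not None:
        t[0:nin, nout] = vector
    return t

def to_matvec(transform):
    # The inverse split: top-left block and last column of the affine.
    transform = np.asarray(transform)
    return transform[:-1, :-1], transform[:-1, -1]

aff = from_matvec(np.diag([2., 3., 4.]), [9., 10., 11.])
mat, vec = to_matvec(aff)   # recovers the inputs
```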
@@ -175,8 +175,8 @@ def append_diag(aff, steps, starts=()):
     """ Add diagonal elements `steps` and translations `starts` to affine

     Typical use is in expanding 4x4 affines to larger dimensions. Nipy is the
-    main consumer because it uses NxM affines, whereas we generally only use 4x4
-    affines; the routine is here for convenience.
+    main consumer because it uses NxM affines, whereas we generally only use
+    4x4 affines; the routine is here for convenience.

     Parameters
     ----------
@@ -219,13 +219,13 @@ def append_diag(aff, steps, starts=()):
     aff_plus = np.zeros((old_n_out + n_steps + 1,
                          old_n_in + n_steps + 1), dtype=aff.dtype)
     # Get stuff from old affine
-    aff_plus[:old_n_out,:old_n_in] = aff[:old_n_out, :old_n_in]
-    aff_plus[:old_n_out,-1] = aff[:old_n_out,-1]
+    aff_plus[:old_n_out, :old_n_in] = aff[:old_n_out, :old_n_in]
+    aff_plus[:old_n_out, -1] = aff[:old_n_out, -1]
     # Add new diagonal elements
     for i, el in enumerate(steps):
         aff_plus[old_n_out+i, old_n_in+i] = el
     # Add translations for new affine, plus last 1
-    aff_plus[old_n_out:,-1] = list(starts) + [1]
+    aff_plus[old_n_out:, -1] = list(starts) + [1]
     return aff_plus
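To see what this hunk builds, here is a concrete sketch of the same steps expanding a 4x4 affine to 5x5 (the step of 2.0 and start of 5.0 are made-up example values):

```python
import numpy as np

# Expand a 4x4 affine by one axis with step 2.0 and start (translation) 5.0,
# following the code in the diff above.
aff = np.diag([2., 3., 4., 1.])
steps, starts = [2.0], [5.0]
old_n_out, old_n_in = aff.shape[0] - 1, aff.shape[1] - 1
n_steps = len(steps)
aff_plus = np.zeros((old_n_out + n_steps + 1,
                     old_n_in + n_steps + 1), dtype=aff.dtype)
# Copy the old rotations/zooms block and translation column.
aff_plus[:old_n_out, :old_n_in] = aff[:old_n_out, :old_n_in]
aff_plus[:old_n_out, -1] = aff[:old_n_out, -1]
# New diagonal elements for the appended axis.
for i, el in enumerate(steps):
    aff_plus[old_n_out + i, old_n_in + i] = el
# Translations for the new axis, plus the final homogeneous 1.
aff_plus[old_n_out:, -1] = list(starts) + [1]
```

The new row and column only touch the appended axis; the original 3D block is untouched.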


51 changes: 26 additions & 25 deletions nibabel/analyze.py
@@ -46,23 +46,23 @@
 The inability to store affines means that we have to guess what orientation the
 image has. Most Analyze images are stored on disk in (fastest-changing to
 slowest-changing) R->L, P->A and I->S order. That is, the first voxel is the
-rightmost, most posterior and most inferior voxel location in the image, and the
-next voxel is one voxel towards the left of the image.
+rightmost, most posterior and most inferior voxel location in the image, and
+the next voxel is one voxel towards the left of the image.

 Most people refer to this disk storage format as 'radiological', on the basis
 that, if you load up the data as an array ``img_arr`` where the first axis is
-the fastest changing, then take a slice in the I->S axis - ``img_arr[:,:,10]`` -
-then the right part of the brain will be on the left of your displayed slice.
+the fastest changing, then take a slice in the I->S axis - ``img_arr[:,:,10]``
+- then the right part of the brain will be on the left of your displayed slice.
 Radiologists like looking at images where the left of the brain is on the right
 side of the image.

 Conversely, if the image has the voxels stored with the left voxels first -
 L->R, P->A, I->S, then this would be 'neurological' format. Neurologists like
 looking at images where the left side of the brain is on the left of the image.

-When we are guessing at an affine for Analyze, this translates to the problem of
-whether the affine should consider proceeding within the data down an X line as
-being from left to right, or right to left.
+When we are guessing at an affine for Analyze, this translates to the problem
+of whether the affine should consider proceeding within the data down an X line
+as being from left to right, or right to left.

 By default we assume that the image is stored in R->L format. We encode this
 choice in the ``default_x_flip`` flag that can be True or False. True means
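A hypothetical numeric sketch of what the flag means for the guessed affine (this is an illustration, not the nibabel API; function name and voxel sizes are made up). With `x_flip` True, stepping along the first array axis moves in the negative world-x direction:

```python
import numpy as np

def guessed_zoom_affine(pixdims, x_flip=True):
    # x_flip=True encodes the R->L disk-order assumption: negate the
    # x zoom so array index 0 is the rightmost voxel.
    zooms = np.array(pixdims, dtype=float)
    if x_flip:
        zooms[0] *= -1
    return np.diag(np.append(zooms, 1.))

radiological = guessed_zoom_affine([2., 2., 2.], x_flip=True)   # x step -2.0
neurological = guessed_zoom_affine([2., 2., 2.], x_flip=False)  # x step +2.0
```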
@@ -153,18 +153,18 @@
 header_dtype = np.dtype(header_key_dtd + image_dimension_dtd +
                         data_history_dtd)

-_dtdefs = ( # code, conversion function, equivalent dtype, aliases
+_dtdefs = (  # code, conversion function, equivalent dtype, aliases
     (0, 'none', np.void),
-    (1, 'binary', np.void), # 1 bit per voxel, needs thought
+    (1, 'binary', np.void),  # 1 bit per voxel, needs thought
     (2, 'uint8', np.uint8),
     (4, 'int16', np.int16),
     (8, 'int32', np.int32),
     (16, 'float32', np.float32),
-    (32, 'complex64', np.complex64), # numpy complex format?
+    (32, 'complex64', np.complex64),  # numpy complex format?
     (64, 'float64', np.float64),
-    (128, 'RGB', np.dtype([('R','u1'),
-                           ('G', 'u1'),
-                           ('B', 'u1')])),
+    (128, 'RGB', np.dtype([('R', 'u1'),
+                           ('G', 'u1'),
+                           ('B', 'u1')])),
     (255, 'all', np.void))

 # Make full code alias bank, including dtype column
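The code-to-dtype table above can be turned into a lookup mapping; a minimal sketch keeping only the columns shown in the diff (the dict name is illustrative, not nibabel's):

```python
import numpy as np

# (code, name, equivalent dtype) rows, as in the diffed _dtdefs table.
_dtdefs = (
    (0, 'none', np.void),
    (1, 'binary', np.void),
    (2, 'uint8', np.uint8),
    (4, 'int16', np.int16),
    (8, 'int32', np.int32),
    (16, 'float32', np.float32),
    (32, 'complex64', np.complex64),
    (64, 'float64', np.float64),
    (128, 'RGB', np.dtype([('R', 'u1'), ('G', 'u1'), ('B', 'u1')])),
    (255, 'all', np.void),
)
code_to_dtype = {code: np.dtype(dt) for code, name, dt in _dtdefs}
```

This is also why `default_structarr` below pairs datatype code 16 with `bitpix` 32: `float32` is 32 bits per voxel.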
@@ -343,7 +343,7 @@ def default_structarr(klass, endianness=None):
         hdr_data['dim'] = 1
         hdr_data['dim'][0] = 0
         hdr_data['pixdim'] = 1
-        hdr_data['datatype'] = 16 # float32
+        hdr_data['datatype'] = 16  # float32
         hdr_data['bitpix'] = 32
         return hdr_data

@@ -858,7 +858,7 @@ def _chk_bitpix(klass, hdr, fix=False):
             rep.problem_level = 10
             rep.problem_msg = 'bitpix does not match datatype'
             if fix:
-                hdr['bitpix'] = bitpix # inplace modification
+                hdr['bitpix'] = bitpix  # inplace modification
                 rep.fix_msg = 'setting bitpix to match datatype'
         return hdr, rep

@@ -897,7 +897,7 @@ class AnalyzeImage(SpatialImage):
     """ Class for basic Analyze format image
     """
     header_class = AnalyzeHeader
-    files_types = (('image','.img'), ('header','.hdr'))
+    files_types = (('image', '.img'), ('header', '.hdr'))
     _compressed_exts = ('.gz', '.bz2')

     ImageArrayProxy = ArrayProxy
@@ -931,10 +931,10 @@ def from_file_map(klass, file_map, mmap=True):
         mmap : {True, False, 'c', 'r'}, optional, keyword only
             `mmap` controls the use of numpy memory mapping for reading image
             array data. If False, do not try numpy ``memmap`` for data array.
-            If one of {'c', 'r'}, try numpy memmap with ``mode=mmap``. A `mmap`
-            value of True gives the same behavior as ``mmap='c'``. If image
-            data file cannot be memory-mapped, ignore `mmap` value and read
-            array from file.
+            If one of {'c', 'r'}, try numpy memmap with ``mode=mmap``. A
+            `mmap` value of True gives the same behavior as ``mmap='c'``. If
+            image data file cannot be memory-mapped, ignore `mmap` value and
+            read array from file.

         Returns
         -------
@@ -971,10 +971,10 @@ def from_filename(klass, filename, mmap=True):
         mmap : {True, False, 'c', 'r'}, optional, keyword only
             `mmap` controls the use of numpy memory mapping for reading image
             array data. If False, do not try numpy ``memmap`` for data array.
-            If one of {'c', 'r'}, try numpy memmap with ``mode=mmap``. A `mmap`
-            value of True gives the same behavior as ``mmap='c'``. If image
-            data file cannot be memory-mapped, ignore `mmap` value and read
-            array from file.
+            If one of {'c', 'r'}, try numpy memmap with ``mode=mmap``. A
+            `mmap` value of True gives the same behavior as ``mmap='c'``. If
+            image data file cannot be memory-mapped, ignore `mmap` value and
+            read array from file.

         Returns
         -------
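The memmap modes this docstring describes can be demonstrated with plain numpy (file name and contents here are made up for illustration). Mode `'c'` (copy-on-write, what `mmap=True` maps to) allows writes that stay in memory and never reach the file; mode `'r'` is strictly read-only:

```python
import os
import tempfile
import numpy as np

fname = os.path.join(tempfile.mkdtemp(), 'data.bin')
np.arange(4, dtype=np.float32).tofile(fname)

# Copy-on-write: assignment is allowed, but only the in-memory copy changes.
arr_c = np.memmap(fname, dtype=np.float32, mode='c')
arr_c[0] = 99.0
on_disk = np.fromfile(fname, dtype=np.float32)  # file still starts with 0.0

# Read-only: assignment would raise ValueError.
arr_r = np.memmap(fname, dtype=np.float32, mode='r')
```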
@@ -1030,7 +1030,8 @@ def to_file_map(self, file_map=None):
         arr_writer = ArrayWriter(data, out_dtype, check_scaling=False)
         hdr_fh, img_fh = self._get_fileholders(file_map)
         # Check if hdr and img refer to same file; this can happen with odd
-        # analyze images but most often this is because it's a single nifti file
+        # analyze images but most often this is because it's a single nifti
+        # file
         hdr_img_same = hdr_fh.same_file_as(img_fh)
         hdrf = hdr_fh.get_prepare_fileobj(mode='wb')
         if hdr_img_same: