
pytest cleaning #2252


Merged: 16 commits merged into nipy:master on Nov 7, 2017

Conversation

@djarecka (Collaborator) commented Oct 24, 2017

- changing str(tmpdir) to tmpdir.strpath in tests (see the discussion in #1916)

Update:

- using more of the tmpdir methods in tests
- updating tests (mostly doctests) so they work on read-only systems (see #1916)
- adding settings for doctests in pytest.ini (and removing some options from doctests)
- adding some names to doctest_namespace in conftest.py

@djarecka requested a review from oesteban October 24, 2017 22:01
@codecov-io commented Oct 25, 2017

Codecov Report

Merging #2252 into master will increase coverage by <.01%.
The diff coverage is 90.9%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master    #2252      +/-   ##
==========================================
+ Coverage   72.32%   72.32%   +<.01%     
==========================================
  Files        1183     1183              
  Lines       59008    59008              
  Branches     8490     8490              
==========================================
+ Hits        42677    42679       +2     
+ Misses      14947    14945       -2     
  Partials     1384     1384
Flag Coverage Δ
#smoketests 72.32% <90.9%> (ø) ⬆️
#unittests 69.9% <90.9%> (ø) ⬆️
Impacted Files Coverage Δ
nipype/pipeline/plugins/tests/test_somaflow.py 50% <0%> (ø) ⬆️
nipype/algorithms/tests/test_mesh_ops.py 33.96% <0%> (ø) ⬆️
nipype/pipeline/plugins/tests/test_callback.py 97.14% <100%> (ø) ⬆️
nipype/pipeline/engine/tests/test_engine.py 93.39% <100%> (ø) ⬆️
nipype/testing/fixtures.py 98.83% <100%> (ø) ⬆️
nipype/interfaces/utility/tests/test_base.py 100% <100%> (ø) ⬆️
nipype/interfaces/freesurfer/tests/test_model.py 100% <100%> (ø) ⬆️
nipype/pipeline/plugins/tests/test_debug.py 81.08% <100%> (ø) ⬆️
nipype/algorithms/tests/test_splitmerge.py 100% <100%> (ø) ⬆️
nipype/interfaces/utility/tests/test_wrappers.py 87.03% <100%> (ø) ⬆️
... and 20 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 37c9db6...52f379c.

@codecov-io commented Oct 25, 2017

Codecov Report

Merging #2252 into master will increase coverage by 0.12%.
The diff coverage is 93.3%.

Impacted file tree graph

@@            Coverage Diff            @@
##           master   #2252      +/-   ##
=========================================
+ Coverage   72.37%   72.5%   +0.12%     
=========================================
  Files        1188    1188              
  Lines       59184   59131      -53     
  Branches     8506    8505       -1     
=========================================
+ Hits        42837   42872      +35     
+ Misses      14961   14873      -88     
  Partials     1386    1386
Flag Coverage Δ
#smoketests 72.5% <93.3%> (+0.12%) ⬆️
#unittests 70.12% <93.3%> (+0.16%) ⬆️
Impacted Files Coverage Δ
nipype/interfaces/niftyseg/lesions.py 100% <ø> (ø) ⬆️
nipype/interfaces/fsl/aroma.py 68.42% <ø> (ø) ⬆️
nipype/interfaces/afni/model.py 85.45% <ø> (ø) ⬆️
nipype/interfaces/freesurfer/model.py 64.14% <ø> (ø) ⬆️
nipype/interfaces/afni/preprocess.py 80.52% <ø> (ø) ⬆️
nipype/interfaces/ants/segmentation.py 72.79% <ø> (ø) ⬆️
nipype/interfaces/niftyseg/maths.py 77.77% <ø> (ø) ⬆️
nipype/interfaces/freesurfer/preprocess.py 65.81% <ø> (ø) ⬆️
nipype/interfaces/niftyfit/qt1.py 100% <ø> (ø) ⬆️
nipype/workflows/dmri/fsl/tests/test_tbss.py 100% <ø> (+82.97%) ⬆️
... and 91 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update fc2c5b4...f4d9afe.

@@ -43,7 +43,7 @@ def test_dvars(tmpdir):
     in_mask=example_data('ds003_sub-01_mc_brainmask.nii.gz'),
     save_all=True,
     intensity_normalization=0)
-    os.chdir(str(tmpdir))
+    os.chdir(tmpdir.strpath)
Member commented:

tmpdir is a py.path.local object which has a lot of methods:

http://py.readthedocs.io/en/latest/path.html#py-path-local-local-file-system-path

Contributor commented:

true, you can do tmpdir.chdir() here 👍

@@ -17,7 +17,7 @@

 def test_modelgen1(tmpdir):
-    tempdir = str(tmpdir)
+    tempdir = tmpdir.strpath
     filename1 = os.path.join(tempdir, 'test1.nii')
Member commented:

so this could be simply tmpdir.join('test.nii')

Collaborator (Author) replied:

ok, can try to find those places automatically

Collaborator (Author) replied:

but py.path.local won't work everywhere. I recall having some problems with it when I was converting tests to pytest, and there were places where a string worked better. I can see that at least for nibabel.save I'll have to keep strings.

Member replied:

I meant places where the methods of py.path.local make sense, such as join. It's not a direct replacement for a string in all cases.

@djarecka (Collaborator, Author) replied Oct 26, 2017:

my only point was that this test will not work if I change the join part to tmpdir.join('test.nii'); it has to be a string.
I can always do tmpdir.join('test.nii').strpath if this is preferred (included in the last commit).
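For context, the tmpdir fixture is a py.path.local object, so the variants discussed above look roughly like this (a sketch; the test name and file name are illustrative):

def test_example(tmpdir):
    # two equivalent ways to get the directory as a string
    wd = str(tmpdir)
    wd = tmpdir.strpath
    # build a child path, then convert back to a string for APIs
    # that require one (e.g. nibabel.save)
    fname = tmpdir.join('test.nii').strpath
    # change the working directory; returns the previous cwd
    old = tmpdir.chdir()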

@oesteban (Contributor) commented:

For the doctests, maybe a conftest.py file at the top folder with the following would work out. Worth trying:

https://github.com/poldracklab/niworkflows/blob/e77c3c103b76f6eddce94625ba6171c13248d7d5/niworkflows/tests/conftest.py#L12-L14

@djarecka (Collaborator, Author) commented Oct 27, 2017:

@oesteban that might be the best solution. I was trying to find something for doctests only, but haven't found anything yet.
I mean, you can always add

>>> tmp = getfixture('tmpdir')
>>> old = tmp.chdir()

but I would prefer to keep the docstrings as short as possible.

btw, should we use conftest.py to import modules like numpy and nibabel, so we don't have to include the imports in doctests?
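pytest provides a doctest_namespace fixture for exactly this; a minimal conftest.py sketch (the np/nb aliases are illustrative):

# conftest.py
import numpy
import nibabel
import pytest

@pytest.fixture(autouse=True)
def add_modules(doctest_namespace):
    # names injected here become visible inside every collected doctest
    doctest_namespace['np'] = numpy
    doctest_namespace['nb'] = nibabel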

@satra (Member) commented Oct 28, 2017:

could you please merge with master and resolve conflicts.

@djarecka (Collaborator, Author) commented:

@oesteban what do you think about this approach?
https://stackoverflow.com/questions/46962007/how-to-automatically-change-to-pytest-temporary-directory-for-all-doctests/46991331#46991331

It uses tmpdir and can be done either for doctests only or for all tests.
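The approach in that answer boils down to an autouse fixture that changes into tmpdir only for doctest items, roughly (a sketch):

# conftest.py
import pytest

@pytest.fixture(autouse=True)
def _docdir(request, tmpdir):
    # run only doctests inside the temporary directory; leave other
    # tests (which request tmpdir themselves) untouched
    doctest_plugin = request.config.pluginmanager.getplugin('doctest')
    if isinstance(request.node, doctest_plugin.DoctestItem):
        with tmpdir.as_cwd():
            yield
    else:
        yield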

@djarecka (Collaborator, Author) commented:

There is one extra problem with changing the working directory for all doctests (or all tests). Many files with doctests start with this part, and the following tests assume that they are in a directory with data; they will fail if I change to a temporary directory.

In conftest.py I can define datadir and add the full path to every single test, e.g. ants.inputs.fixed_image = [os.path.join(datadir, 'T1.nii')], but this change will be needed in many places.

@oesteban, @satra, let me know if you have any opinion about it. I'm not sure how important it is to you to run everything on read-only systems, and whether it's OK to add this boilerplate to doctests, which should probably stay as simple as possible.

Or maybe we want to add some extra code only in the tests you're having problems with?

@satra (Member) commented Oct 29, 2017:

@djarecka - i agree that we should not be making the examples more complex. let's just focus on the unit-test cleanup first, and we can tackle the doctest issue later.

we can perhaps change the doctests for the specific interfaces that write files during doctests. that's a much smaller list.

@oesteban - regarding read only systems, why are we thinking of running tests on these systems? can't we test before such a container is made read only?

@satra added this to the 0.14.0 milestone Oct 29, 2017
@djarecka (Collaborator, Author) commented Oct 29, 2017:

@satra yes, you can add those two lines within specific doctests if you want:

>>> tmp = getfixture('tmpdir')                                                                                            
>>> old = tmp.chdir()

do you want me to add anything specific to this PR?

@satra (Member) commented Oct 30, 2017:

@djarecka - knowing which interfaces fail doctests for read only systems would be good. if it's a small list, we can fix those rather than everything.

@mgxd (Member) commented Oct 30, 2017:

I think I covered most of the culprits here

@djarecka (Collaborator, Author) commented:

@mgxd ok, I will review those tests

@djarecka (Collaborator, Author) commented:

@satra I'm also reviewing other tests. I remember that I left tempfile in a few places, since it offers methods that I couldn't find in tmpdir, e.g. mkstemp, which creates a new file with a specific suffix and prefix.
Let me know if the goal is to remove tempfile entirely; there is always a different way to do it with tmpdir (see the sketch below).
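For comparison, the tempfile pattern in question next to a tmpdir-based near-equivalent (a sketch; the prefix, suffix, and names are illustrative):

import tempfile

# tempfile.mkstemp: opens a unique file with a chosen prefix/suffix
fd, fname = tempfile.mkstemp(prefix='rec_', suffix='.nii')

def test_example(tmpdir):
    # tmpdir is already unique per test, so a fixed name carrying the
    # desired prefix/suffix is usually an adequate replacement
    fname = tmpdir.join('rec_file.nii').strpath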

@satra (Member) commented Oct 30, 2017:

@djarecka - the goal is to be semantically uniform such that developers can rely on expectations and to use the tmpdir methods when possible.

…ing conftest.py with datadir variable and some basic libraries (so you dont have to import them in doctests)
@djarecka (Collaborator, Author) commented:

@mgxd can you check if those tests are working now in the readonly setting you were using?

@oesteban (Contributor) left a review:

Great work, very much needed. Left some stylistic comments.


@@ -20,7 +20,7 @@ def check_close(val1, val2):
     in1 = example_data('segmentation0.nii.gz')
     in2 = example_data('segmentation1.nii.gz')

-    os.chdir(str(tmpdir))
+    os.chdir(tmpdir.strpath)
Contributor commented:

tmpdir.chdir() ?

     mskdata = nb.load(in_mask, mmap=NUMPY_MMAP).get_data()
     aff = nb.load(in_mask, mmap=NUMPY_MMAP).affine

     dwshape = (mskdata.shape[0], mskdata.shape[1], mskdata.shape[2], 6)
     dwdata = np.random.normal(size=dwshape)
-    os.chdir(str(tmpdir))
+    os.chdir(tmpdir.strpath)
Contributor commented:

tmpdir.chdir() ?

@@ -74,7 +74,7 @@ class ABoverlap(AFNICommand):
     >>> aboverlap.inputs.in_file_a = 'functional.nii'
     >>> aboverlap.inputs.in_file_b = 'structural.nii'
     >>> aboverlap.inputs.out_file = 'out.mask_ae_overlap.txt'
-    >>> aboverlap.cmdline # doctest: +ALLOW_UNICODE
+    >>> aboverlap.cmdline
@oesteban (Contributor) commented Oct 30, 2017:

Have you tried this on Python 2 and it works? (now it is a global option 👍 )

@@ -70,13 +70,16 @@ class Dcm2nii(CommandLine):
     Examples
     ========

+    >>> tmp = getfixture('tmpdir')
Contributor commented:

Please consider the .. testsetup:: directive to hide these two lines in the documentation.

http://www.sphinx-doc.org/en/stable/ext/doctest.html#directive-testsetup

Collaborator (Author) replied:

@oesteban: do you have an example of how to use it? I'm adding .. testsetup:: before those lines and my tests are working, but everything is still visible if you ask for help().
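For reference, the directive placement would look roughly like this (a sketch); note that sphinx.ext.doctest hides the block only in the Sphinx-rendered docs, while the raw docstring printed by help() is unchanged, which matches the behavior described above:

def example():
    """Illustration of the directive placement.

    .. testsetup::

        >>> tmp = getfixture('tmpdir')
        >>> old = tmp.chdir()

    >>> open('scratch.txt', 'w').close()  # runs inside the tmpdir
    """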

@@ -524,7 +524,7 @@ def test_itersource_two_join_nodes(tmpdir):

 def test_set_join_node_file_input(tmpdir):
     """Test collecting join inputs to a set."""
-    wd = str(tmpdir)
+    wd = tmpdir.strpath
     os.chdir(wd)
Contributor commented:

tmpdir.chdir()

@@ -524,7 +524,7 @@ def test_itersource_two_join_nodes(tmpdir):

 def test_set_join_node_file_input(tmpdir):
     """Test collecting join inputs to a set."""
-    wd = str(tmpdir)
+    wd = tmpdir.strpath
     os.chdir(wd)
     open('test.nii', 'w+').close()
Contributor commented:

tmpdir.join('test.nii').close()

@@ -524,7 +524,7 @@ def test_itersource_two_join_nodes(tmpdir):

 def test_set_join_node_file_input(tmpdir):
     """Test collecting join inputs to a set."""
-    wd = str(tmpdir)
+    wd = tmpdir.strpath
     os.chdir(wd)
     open('test.nii', 'w+').close()
     open('test2.nii', 'w+').close()
Contributor commented:

tmpdir.join('test2.nii').close()
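(As an aside, py.path.local objects have no close() method of their own, so a working form of these two suggestions might use ensure(), which creates an empty file; a sketch:)

def test_set_join_node_file_input(tmpdir):
    tmpdir.chdir()
    tmpdir.join('test.nii').ensure()   # creates the empty file
    tmpdir.join('test2.nii').ensure()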

@@ -547,7 +547,7 @@ def test_set_join_node_file_input(tmpdir):

 def test_nested_workflow_join(tmpdir):
     """Test collecting join inputs within a nested workflow"""
-    wd = str(tmpdir)
+    wd = tmpdir.strpath
     os.chdir(wd)
Contributor commented:

tmpdir.chdir()

addopts = --doctest-modules
doctest_optionflags = ALLOW_UNICODE NORMALIZE_WHITESPACE
Contributor commented:

Gotcha!

@djarecka (Collaborator, Author) commented:

@oesteban thanks for the comments. I'm still reviewing those tests and will try to implement all your points. At the beginning I changed some parts automatically, but that didn't work as well as hoped, so I will review the chdir parts etc.

@djarecka (Collaborator, Author) commented:

I'm reviewing the tests and have tried to remove all remaining uses of tempfile. There are a few tests that were xfailing or skipping that I was afraid to change, one test where I really had no idea why I was getting errors (pipeline/plugins/tests/test_multiproc_nondaemon.py), and two tests about which I still have questions:

  • you will see one test in nipype/interfaces/tests/test_io.py that fails because of a too-long name, but I'm simply not sure whether it is really right that DataSink creates such a long path (see the error message)

  • nipype/interfaces/tests/test_nilearn.py has tests within a class, and after introducing a temporary file to setup_class, teardown_class stopped working (it doesn't see self.orig_dir). I've simply commented it out for now, but if anyone understands it, I'd be thankful for an explanation.

@mgxd (Member) commented Oct 31, 2017:

@djarecka Here's the error log I ran into (I pulled from commit 8baac42, so this may be outdated) - all I'm really doing is chmod -R a-w . within my editable nipype clone and then running make test-code


====================================================================== FAILURES =======================================================================
__________________________________________________ [doctest] nipype.interfaces.base.SimpleInterface ___________________________________________________
1238     ...     input_spec = DoubleInputSpec
1239     ...     output_spec = DoubleOutputSpec
1240     ...
1241     ...     def _run_interface(self, runtime):
1242     ...          self._results['doubled'] = double(self.inputs.x)
1243     ...          return runtime
1244 
1245     >>> dbl = Double()
1246     >>> dbl.inputs.x = 2
1247     >>> dbl.run().outputs.doubled
UNEXPECTED EXCEPTION: PermissionError(13, 'Permission denied')
Traceback (most recent call last):

  File "/home/mathias/miniconda2/envs/dev3/lib/python3.6/doctest.py", line 1330, in __run
    compileflags, 1), test.globs)

  File "<doctest nipype.interfaces.base.SimpleInterface[6]>", line 1, in <module>

  File "/code/nipype/nipype/interfaces/base.py", line 1080, in run
    mon_sp = ResourceMonitor(proc_pid, freq=mon_freq)

  File "/code/nipype/nipype/utils/profiler.py", line 44, in __init__
    self._logfile = open(self._fname, 'w')

PermissionError: [Errno 13] Permission denied: '.proc-7600_time-1509464139.0193746_freq-1.00'

/code/nipype/nipype/interfaces/base.py:1247: UnexpectedException
__________________________________________________________________ test_importerror ___________________________________________________________________

creating_graphs = ['/tmp/pytest-of-mathias/pytest-2/test_importerror0name0.pck', '/tmp/pytest-of-mathias/pytest-2/test_importerror0name1...p/pytest-of-mathias/pytest-2/test_importerror0name4.pck', '/tmp/pytest-of-mathias/pytest-2/test_importerror0name5.pck']

    @pytest.mark.skipif(have_cv, reason="tests for import error, cviewer available")
    def test_importerror(creating_graphs):
        graphlist = creating_graphs
        group1 = graphlist[:3]
        group2 = graphlist[3:]
    
        nbs = NetworkBasedStatistic()
        nbs.inputs.in_group1 = group1
        nbs.inputs.in_group2 = group2
        nbs.inputs.edge_key = "weight"
    
        with pytest.raises(ImportError) as e:
>           nbs.run()

../../interfaces/cmtk/tests/test_nbs.py:40: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../interfaces/base.py:1080: in run
    mon_sp = ResourceMonitor(proc_pid, freq=mon_freq)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <[AssertionError("Thread.__init__() was not called") raised in repr()] ResourceMonitor object at 0x7f7febedd7b8>, pid = 7600, freq = 1.0
fname = '.proc-7600_time-1509464141.1586614_freq-1.00', python = True

    def __init__(self, pid, freq=5, fname=None, python=True):
        # Make sure psutil is imported
        import psutil
    
        if freq < 0.2:
            raise RuntimeError('Frequency (%0.2fs) cannot be lower than 0.2s' % freq)
    
        if fname is None:
            fname = '.proc-%d_time-%s_freq-%0.2f' % (pid, time(), freq)
        self._fname = fname
>       self._logfile = open(self._fname, 'w')
E       PermissionError: [Errno 13] Permission denied: '.proc-7600_time-1509464141.1586614_freq-1.00'

../../utils/profiler.py:44: PermissionError
_______________________________________________________________ test_coherence_analysis _______________________________________________________________

    @pytest.mark.skipif(no_nitime, reason="nitime is not installed")
    def test_coherence_analysis():
        """Test that the coherence analyzer works """
        import nitime.analysis as nta
        import nitime.timeseries as ts
    
        # This is the nipype interface analysis:
        CA = nitime.CoherenceAnalyzer()
        CA.inputs.TR = 1.89
        CA.inputs.in_file = example_data('fmri_timeseries.csv')
        if display_available:
            tmp_png = tempfile.mkstemp(suffix='.png')[1]
            CA.inputs.output_figure_file = tmp_png
        tmp_csv = tempfile.mkstemp(suffix='.csv')[1]
        CA.inputs.output_csv_file = tmp_csv
    
>       o = CA.run()

../../interfaces/nitime/tests/test_nitime.py:48: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../interfaces/base.py:1080: in run
    mon_sp = ResourceMonitor(proc_pid, freq=mon_freq)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <[AssertionError("Thread.__init__() was not called") raised in repr()] ResourceMonitor object at 0x7f7febcb32b0>, pid = 7600, freq = 1.0
fname = '.proc-7600_time-1509464143.6017606_freq-1.00', python = True

    def __init__(self, pid, freq=5, fname=None, python=True):
        # Make sure psutil is imported
        import psutil
    
        if freq < 0.2:
            raise RuntimeError('Frequency (%0.2fs) cannot be lower than 0.2s' % freq)
    
        if fname is None:
            fname = '.proc-%d_time-%s_freq-%0.2f' % (pid, time(), freq)
        self._fname = fname
>       self._logfile = open(self._fname, 'w')
E       PermissionError: [Errno 13] Permission denied: '.proc-7600_time-1509464143.6017606_freq-1.00'

../../utils/profiler.py:44: PermissionError
_________________________________________________________________ test_mapnode_crash3 _________________________________________________________________

self = <nipype.pipeline.plugins.linear.LinearPlugin object at 0x7f7fe991e240>, graph = <networkx.classes.digraph.DiGraph object at 0x7f7fe991e978>
config = {'check': {'interval': '1209600'}, 'execution': {'crashdump_dir': '/code/nipype', 'crashfile_format': 'pklz', 'create_...ogging': {'interface_level': 'INFO', 'log_directory': '/home/mathias', 'log_rotate': '4', 'log_size': '16384000', ...}}
updatehash = False

    def run(self, graph, config, updatehash=False):
        """Executes a pre-defined pipeline in a serial order.
    
            Parameters
            ----------
    
            graph : networkx digraph
                defines order of execution
            """
    
        if not isinstance(graph, nx.DiGraph):
            raise ValueError('Input must be a networkx digraph object')
        logger.info("Running serially.")
        old_wd = os.getcwd()
        notrun = []
        donotrun = []
        nodes, _ = topological_sort(graph)
        for node in nodes:
            try:
                if node in donotrun:
                    continue
                if self._status_callback:
                    self._status_callback(node, 'start')
>               node.run(updatehash=updatehash)

/code/nipype/nipype/pipeline/plugins/linear.py:43: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = testmapnodecrash.myfunc, updatehash = False

    def run(self, updatehash=False):
        """Execute the node in its directory.
    
            Parameters
            ----------
    
            updatehash: boolean
                Update the hash stored in the output directory
            """
        # check to see if output directory and hash exist
    
        if self.config is None:
            self.config = deepcopy(config._sections)
        else:
            self.config = merge_dict(deepcopy(config._sections), self.config)
        if not self._got_inputs:
            self._get_inputs()
            self._got_inputs = True
        outdir = self.output_dir()
        logger.info("Executing node %s in dir: %s", self.fullname, outdir)
        if op.exists(outdir):
            logger.debug('Output dir: %s', to_str(os.listdir(outdir)))
        hash_info = self.hash_exists(updatehash=updatehash)
        hash_exists, hashvalue, hashfile, hashed_inputs = hash_info
        logger.debug(
            'updatehash=%s, overwrite=%s, always_run=%s, hash_exists=%s',
            updatehash, self.overwrite, self._interface.always_run, hash_exists)
        if (not updatehash and (((self.overwrite is None and
                                  self._interface.always_run) or
                                 self.overwrite) or not
                                hash_exists)):
            logger.debug("Node hash: %s", hashvalue)
    
            # by rerunning we mean only nodes that did finish to run previously
            json_pat = op.join(outdir, '_0x*.json')
            json_unfinished_pat = op.join(outdir, '_0x*_unfinished.json')
            need_rerun = (op.exists(outdir) and not
                          isinstance(self, MapNode) and
                          len(glob(json_pat)) != 0 and
                          len(glob(json_unfinished_pat)) == 0)
            if need_rerun:
                logger.debug(
                    "Rerunning node:\n"
                    "updatehash = %s, self.overwrite = %s, self._interface.always_run = %s, "
                    "os.path.exists(%s) = %s, hash_method = %s", updatehash, self.overwrite,
                    self._interface.always_run, hashfile, op.exists(hashfile),
                    self.config['execution']['hash_method'].lower())
                log_debug = config.get('logging', 'workflow_level') == 'DEBUG'
                if log_debug and not op.exists(hashfile):
                    exp_hash_paths = glob(json_pat)
                    if len(exp_hash_paths) == 1:
                        split_out = split_filename(exp_hash_paths[0])
                        exp_hash_file_base = split_out[1]
                        exp_hash = exp_hash_file_base[len('_0x'):]
                        logger.debug("Previous node hash = %s", exp_hash)
                        try:
                            prev_inputs = load_json(exp_hash_paths[0])
                        except:
                            pass
                        else:
                            logging.logdebug_dict_differences(prev_inputs,
                                                              hashed_inputs)
                cannot_rerun = (str2bool(
                    self.config['execution']['stop_on_first_rerun']) and not
                    (self.overwrite is None and self._interface.always_run))
                if cannot_rerun:
                    raise Exception(("Cannot rerun when 'stop_on_first_rerun' "
                                     "is set to True"))
            hashfile_unfinished = op.join(outdir,
                                          '_0x%s_unfinished.json' %
                                          hashvalue)
            if op.exists(hashfile):
                os.remove(hashfile)
            rm_outdir = (op.exists(outdir) and not
                         (op.exists(hashfile_unfinished) and
                             self._interface.can_resume) and not
                         isinstance(self, MapNode))
            if rm_outdir:
                logger.debug("Removing old %s and its contents", outdir)
                try:
                    rmtree(outdir)
                except OSError as ex:
                    outdircont = os.listdir(outdir)
                    if ((ex.errno == errno.ENOTEMPTY) and (len(outdircont) == 0)):
                        logger.warn(
                            'An exception was raised trying to remove old %s, but the path '
                            'seems empty. Is it an NFS mount?. Passing the exception.', outdir)
                    elif ((ex.errno == errno.ENOTEMPTY) and (len(outdircont) != 0)):
                        logger.debug(
                            'Folder contents (%d items): %s', len(outdircont), outdircont)
                        raise ex
                    else:
                        raise ex
    
            else:
                logger.debug(
                    "%s found and can_resume is True or Node is a MapNode - resuming execution",
                    hashfile_unfinished)
                if isinstance(self, MapNode):
                    # remove old json files
                    for filename in glob(op.join(outdir, '_0x*.json')):
                        os.unlink(filename)
            outdir = make_output_dir(outdir)
            self._save_hashfile(hashfile_unfinished, hashed_inputs)
            self.write_report(report_type='preexec', cwd=outdir)
            savepkl(op.join(outdir, '_node.pklz'), self)
            savepkl(op.join(outdir, '_inputs.pklz'),
                    self.inputs.get_traitsfree())
            try:
>               self._run_interface()

/code/nipype/nipype/pipeline/engine/nodes.py:407: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = testmapnodecrash.myfunc, execute = True, updatehash = False

    def _run_interface(self, execute=True, updatehash=False):
        """Run the mapnode interface
    
            This is primarily intended for serial execution of mapnode. A parallel
            execution requires creation of new nodes that can be spawned
            """
        old_cwd = os.getcwd()
        cwd = self.output_dir()
        os.chdir(cwd)
        self._check_iterfield()
        if execute:
            if self.nested:
                nitems = len(filename_to_list(flatten(getattr(self.inputs,
                                                              self.iterfield[0]))))
            else:
                nitems = len(filename_to_list(getattr(self.inputs,
                                                      self.iterfield[0])))
            nodenames = ['_' + self.name + str(i) for i in range(nitems)]
            self._collate_results(self._node_runner(self._make_nodes(cwd),
>                                                   updatehash=updatehash))

/code/nipype/nipype/pipeline/engine/nodes.py:1326: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = testmapnodecrash.myfunc, nodes = <generator object MapNode._node_runner at 0x7f7fe987a258>

    def _collate_results(self, nodes):
        self._result = InterfaceResult(interface=[], runtime=[],
                                       provenance=[], inputs=[],
                                       outputs=self.outputs)
        returncode = []
        for i, node, err in nodes:
            self._result.runtime.insert(i, None)
            if node.result:
                if hasattr(node.result, 'runtime'):
                    self._result.interface.insert(i, node.result.interface)
                    self._result.inputs.insert(i, node.result.inputs)
                    self._result.runtime[i] = node.result.runtime
                if hasattr(node.result, 'provenance'):
                    self._result.provenance.insert(i, node.result.provenance)
            returncode.insert(i, err)
            if self.outputs:
                for key, _ in list(self.outputs.items()):
                    rm_extra = (self.config['execution']
                                ['remove_unnecessary_outputs'])
                    if str2bool(rm_extra) and self.needed_outputs:
                        if key not in self.needed_outputs:
                            continue
                    values = getattr(self._result.outputs, key)
                    if not isdefined(values):
                        values = []
                    if node.result.outputs:
                        values.insert(i, node.result.outputs.get()[key])
                    else:
                        values.insert(i, None)
                    defined_vals = [isdefined(val) for val in values]
                    if any(defined_vals) and self._result.outputs:
                        setattr(self._result.outputs, key, values)
    
        if self.nested:
            for key, _ in list(self.outputs.items()):
                values = getattr(self._result.outputs, key)
                if isdefined(values):
                    values = unflatten(values, filename_to_list(getattr(self.inputs, self.iterfield[0])))
                setattr(self._result.outputs, key, values)
    
        if returncode and any([code is not None for code in returncode]):
            msg = []
            for i, code in enumerate(returncode):
                if code is not None:
                    msg += ['Subnode %d failed' % i]
                    msg += ['Error:', str(code)]
            raise Exception('Subnodes of node: %s failed:\n%s' %
>                           (self.name, '\n'.join(msg)))
E           Exception: Subnodes of node: myfunc failed:
E           Subnode 0 failed
E           Error:
E           dummy_func() got an unexpected keyword argument 'WRONG'
E           Subnode 1 failed
E           Error:
E           dummy_func() got an unexpected keyword argument 'WRONG'
E           Subnode 2 failed
E           Error:
E           dummy_func() got an unexpected keyword argument 'WRONG'

/code/nipype/nipype/pipeline/engine/nodes.py:1232: Exception

During handling of the above exception, another exception occurred:

tmpdir = local('/tmp/pytest-of-mathias/pytest-2/test_mapnode_crash30')

    @pytest.mark.skipif(sys.version_info < (3,0),
                       reason="the famous segfault #1788")
    def test_mapnode_crash3(tmpdir):
        """Test mapnode crash when mapnode is embedded in a workflow"""
        node = pe.MapNode(niu.Function(input_names=['WRONG'],
                                       output_names=['newstring'],
                                       function=dummy_func),
                          iterfield=['WRONG'],
                          name='myfunc')
        node.inputs.WRONG = ['string{}'.format(i) for i in range(3)]
        wf = pe.Workflow('testmapnodecrash')
        wf.add_nodes([node])
        wf.base_dir = tmpdir.strpath
        with pytest.raises(RuntimeError):
>           wf.run(plugin='Linear')

/code/nipype/nipype/pipeline/engine/tests/test_utils.py:399: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/code/nipype/nipype/pipeline/engine/workflows.py:591: in run
    runner.run(execgraph, updatehash=updatehash, config=self.config)
/code/nipype/nipype/pipeline/plugins/linear.py:52: in run
    crashfile = report_crash(node)
/code/nipype/nipype/pipeline/plugins/tools.py:58: in report_crash
    savepkl(crashfile, dict(node=node, traceback=traceback))
/code/nipype/nipype/utils/filemanip.py:623: in savepkl
    pkl_file = gzip.open(filename, 'wb')
/home/mathias/miniconda2/envs/dev3/lib/python3.6/gzip.py:53: in open
    binary_file = GzipFile(filename, gz_mode, compresslevel)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <[AttributeError("'GzipFile' object has no attribute 'fileobj'") raised in repr()] GzipFile object at 0x7f7fe991fe10>
filename = '/code/nipype/crash-20171031-113736-mathias-myfunc-3723030d-f1e8-4a4a-8d5c-e72fefa81e12.pklz', mode = 'wb', compresslevel = 9
fileobj = None, mtime = None

    def __init__(self, filename=None, mode=None,
                 compresslevel=9, fileobj=None, mtime=None):
        """Constructor for the GzipFile class.
    
            At least one of fileobj and filename must be given a
            non-trivial value.
    
            The new class instance is based on fileobj, which can be a regular
            file, an io.BytesIO object, or any other object which simulates a file.
            It defaults to None, in which case filename is opened to provide
            a file object.
    
            When fileobj is not None, the filename argument is only used to be
            included in the gzip file header, which may include the original
            filename of the uncompressed file.  It defaults to the filename of
            fileobj, if discernible; otherwise, it defaults to the empty string,
            and in this case the original filename is not included in the header.
    
            The mode argument can be any of 'r', 'rb', 'a', 'ab', 'w', 'wb', 'x', or
            'xb' depending on whether the file will be read or written.  The default
            is the mode of fileobj if discernible; otherwise, the default is 'rb'.
            A mode of 'r' is equivalent to one of 'rb', and similarly for 'w' and
            'wb', 'a' and 'ab', and 'x' and 'xb'.
    
            The compresslevel argument is an integer from 0 to 9 controlling the
            level of compression; 1 is fastest and produces the least compression,
            and 9 is slowest and produces the most compression. 0 is no compression
            at all. The default is 9.
    
            The mtime argument is an optional numeric timestamp to be written
            to the last modification time field in the stream when compressing.
            If omitted or None, the current time is used.
    
            """
    
        if mode and ('t' in mode or 'U' in mode):
            raise ValueError("Invalid mode: {!r}".format(mode))
        if mode and 'b' not in mode:
            mode += 'b'
        if fileobj is None:
>           fileobj = self.myfileobj = builtins.open(filename, mode or 'rb')
E           PermissionError: [Errno 13] Permission denied: '/code/nipype/crash-20171031-113736-mathias-myfunc-3723030d-f1e8-4a4a-8d5c-e72fefa81e12.pklz'

/home/mathias/miniconda2/envs/dev3/lib/python3.6/gzip.py:163: PermissionError
---------------------------------------------------------------- Captured stdout call -----------------------------------------------------------------
171031-11:37:35,900 workflow INFO:
	 Workflow testmapnodecrash settings: ['check', 'execution', 'logging']
171031-11:37:35,901 workflow INFO:
	 Running serially.
171031-11:37:35,902 workflow INFO:
	 Executing node testmapnodecrash.myfunc in dir: /tmp/pytest-of-mathias/pytest-2/test_mapnode_crash30/testmapnodecrash/myfunc
171031-11:37:35,904 workflow INFO:
	 Executing node _myfunc0 in dir: /tmp/pytest-of-mathias/pytest-2/test_mapnode_crash30/testmapnodecrash/myfunc/mapflow/_myfunc0
171031-11:37:35,905 workflow INFO:
	 Running node "_myfunc0" ("nipype.interfaces.utility.wrappers.Function").
171031-11:37:36,183 workflow INFO:
	 Executing node _myfunc1 in dir: /tmp/pytest-of-mathias/pytest-2/test_mapnode_crash30/testmapnodecrash/myfunc/mapflow/_myfunc1
171031-11:37:36,185 workflow INFO:
	 Running node "_myfunc1" ("nipype.interfaces.utility.wrappers.Function").
171031-11:37:36,458 workflow INFO:
	 Executing node _myfunc2 in dir: /tmp/pytest-of-mathias/pytest-2/test_mapnode_crash30/testmapnodecrash/myfunc/mapflow/_myfunc2
171031-11:37:36,459 workflow INFO:
	 Running node "_myfunc2" ("nipype.interfaces.utility.wrappers.Function").
171031-11:37:36,735 workflow ERROR:
	 Node myfunc failed to run on host mathias-N501VW.
171031-11:37:36,735 workflow ERROR:
	 Saving crash info to /code/nipype/crash-20171031-113736-mathias-myfunc-3723030d-f1e8-4a4a-8d5c-e72fefa81e12.pklz
Traceback (most recent call last):
  File "/code/nipype/nipype/pipeline/plugins/linear.py", line 43, in run
    node.run(updatehash=updatehash)
  File "/code/nipype/nipype/pipeline/engine/nodes.py", line 407, in run
    self._run_interface()
  File "/code/nipype/nipype/pipeline/engine/nodes.py", line 1326, in _run_interface
    updatehash=updatehash))
  File "/code/nipype/nipype/pipeline/engine/nodes.py", line 1232, in _collate_results
    (self.name, '\n'.join(msg)))
Exception: Subnodes of node: myfunc failed:
Subnode 0 failed
Error:
dummy_func() got an unexpected keyword argument 'WRONG'
Subnode 1 failed
Error:
dummy_func() got an unexpected keyword argument 'WRONG'
Subnode 2 failed
Error:
dummy_func() got an unexpected keyword argument 'WRONG'

=============================================================== pytest-warning summary ================================================================
WI9 None could not create cache path /code/nipype/.cache/v/cache/lastfailed

@djarecka (Collaborator, Author) commented:

@mgxd: those errors are from tests that you did not list previously; it might mean that the fix works, but more tests require changes

@djarecka (Collaborator, Author) commented:

@mgxd but thanks for checking! I'll test it on my own; it makes sense to just change the permissions.

@djarecka (Collaborator, Author) commented Nov 1, 2017:

I still have a problem with one test on read-only systems; it's the well-known nipype/pipeline/engine/tests/test_utils.py::test_mapnode_crash3. I believe this is because nipype tries to write the crash file in the nipype directory. Is it really OK that the default directory for crash files is not the cwd?

@satra (Member) commented Nov 1, 2017:

@djarecka - the location of where crash files are dumped can be changed via config.

@djarecka (Collaborator, Author) commented Nov 1, 2017:

@satra I noticed that I can change wf.config["execution"]["crashdump_dir"], so I can change the directory in this one test. I was just wondering if this is expected behavior.

@satra (Member) commented Nov 1, 2017:

> @satra I noticed that I can change wf.config["execution"]["crashdump_dir"], so I can change the directory in this one test. I was just wondering if this is expected behavior.

yes

@djarecka (Collaborator, Author) commented Nov 1, 2017:

@satra ok, I changed this one test. Everything should work now on read-only systems.
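For illustration, the fix presumably amounts to something like this in the failing test (a sketch assembled from the config key and workflow name mentioned above):

import nipype.pipeline.engine as pe

def test_mapnode_crash3(tmpdir):
    wf = pe.Workflow('testmapnodecrash')
    wf.base_dir = tmpdir.strpath
    # keep the crash file out of the (possibly read-only) source tree
    wf.config['execution']['crashdump_dir'] = tmpdir.strpath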

@djarecka (Collaborator, Author) commented Nov 1, 2017:

I mean, except the one test that is failing regardless of permissions... will debug it again today.

…elated to tmpdir that is the same in fixtures and test functions, DataSink was trying to copy a dir to a subdir etc.)
@mgxd (Member) left a review:

LGTM - all tests passing in my readonly configuration

@mgxd merged commit 7a721b8 into nipy:master Nov 7, 2017
@@ -123,10 +123,10 @@ class Bunch(object):
     --------
     >>> from nipype.interfaces.base import Bunch
     >>> inputs = Bunch(infile='subj.nii', fwhm=6.0, register_to_mean=True)
-    >>> inputs # doctest: +ALLOW_UNICODE
+    >>> inputs
Member commented:

why remove unicode?

Expected:
    Bunch(fwhm=6.0, infile='subj.nii', register_to_mean=True)
Got:
    Bunch(fwhm=6.0, infile=u'subj.nii', register_to_mean=True)

Collaborator (Author) replied:

I set this in pytest.ini: https://github.com/nipy/nipype/blob/master/pytest.ini#L4

is it failing?

Collaborator (Author) replied:

You'd have to show me where it's failing. It works on my py2, and all tests for the PR were also passing.

     'ls -al'

     # Use get_traitsfree() to check all inputs set
-    >>> pprint.pprint(cli.inputs.get_traitsfree()) # doctest: +NORMALIZE_WHITESPACE +ALLOW_UNICODE
+    >>> pprint.pprint(cli.inputs.get_traitsfree())
Member commented:

we should add these tags back in?

Expected:
    {'args': '-al',
     'environ': {'DISPLAY': ':1'},
     'ignore_exception': False}
Got:
    {'args': '-al', 'environ': {'DISPLAY': ':1'}, 'ignore_exception': False}

Collaborator (Author) replied:

the same here

Contributor commented:

Seems like +ALLOW_UNICODE did work right?

@djarecka (Collaborator, Author) commented Nov 9, 2017:

@oesteban @mgxd - so the way Mathias is testing is different from what I always do:

conda create -n testnipype python=2 pip
source activate testnipype
conda install -y nipype
pip install -U https://github.com/nipy/nipype/archive/master.zip
python -c "import nipype; nipype.test()"

Indeed, if you test like this, pytest is testing the nipype installed in the environment, i.e. ....../envs/testnipype/lib/python2.7/site-packages/nipype/interfaces/base.py, and there is no pytest.ini to collect.

Collaborator (Author) replied:

I'm just wondering if the pytest.ini shouldn't be in nipype/nipype. Any thoughts?

@djarecka (Collaborator, Author) commented:

@oesteban, @mgxd: talked to @satra and will just modify nipype.test()
