Is it possible that some other job is writing to the same file? If you
are scheduling jobs in bulk, could you accidentally be scheduling two
jobs that write to the same directory?
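If the error is transient (as the "works fine hours later" behavior suggests), a stopgap on the Python side is to retry the open when errno is ESTALE (116 on Linux). A minimal sketch — `retry_on_estale` is a hypothetical helper, and the retry count and delay are illustrative, not tuned values:

```python
import errno
import time


def retry_on_estale(fn, retries=3, delay=5.0):
    """Call fn(), retrying on a stale NFS file handle (errno.ESTALE).

    Re-raises immediately on any other OSError, and on the final
    failed attempt. This works around a transient NFS condition;
    it does not fix the underlying cause (e.g. two jobs racing on
    the same directory).
    """
    for attempt in range(retries):
        try:
            return fn()
        except OSError as e:
            if e.errno != errno.ESTALE or attempt == retries - 1:
                raise
            time.sleep(delay)


# Usage in the pipeline would look like (output as in the traceback):
#   wp = retry_on_estale(lambda: open(output, "w"))
```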
On Tue, 2015-08-25 at 08:44 -0400, Karro, John wrote:
> Can anyone explain to me the following OS errors occurring
> sporadically on Redhawk, as returned by my Python code:
>
>
> Traceback (most recent call last):
>   File "consensus_seq.py", line 84, in <module>
>     main(args.seq, args.elements, args.output, args.fa_output)
>   File "consensus_seq.py", line 45, in main
>     wp = open(output, "w")
> OSError: [Errno 116] Stale NFS file handle:
> 'SEEDS1/PHRAIDER/ce10.chrV.s2.f3.consensus.txt'
>
>
> I'm running batches of jobs, and this seems to pop up every once in a
> while and kill my pipeline. The directory does exist, and if I rerun
> the program (many hours later) it works fine.
>
> Any idea why this might happen?
>
> John
>
>
>
> ----------------------------------------------------------------------------------------------
>
>
> Dr. John Karro, Associate Professor
> Department of Computer Science and Software Engineering
> Affiliate: Department of Microbiology, Department of Statistics
> Office: Benton 205D, Miami University, Oxford, Ohio
> ----------------------------------------------------------------------------------------------