RESCOMP Archives

June 2006

RESCOMP@LISTSERV.MIAMIOH.EDU

Subject:
From:
Reply To: Research Computing Support <[log in to unmask]>, Robin <[log in to unmask]>
Date: Mon, 12 Jun 2006 12:05:04 -0400
Probably not an IBRIX problem.

I've added yet another job cleanup utility.

We already have mpiexec's own cleanup (for jobs started with mpiexec), plus an epilog for mpirun_ssh jobs, which has been in place for a while. The epilog kills any leftover processes that have the finishing job's PBS_JOBID associated with them.
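For anyone curious, the epilog logic amounts to roughly the sketch below. This is illustrative only, not the script actually installed; the /proc scan and the choice of SIGKILL are my assumptions.

```shell
#!/bin/sh
# Rough sketch of a PBS epilog: PBS passes the job id as the first
# argument; kill any surviving process whose environment still carries
# that PBS_JOBID. (Illustrative only -- not the installed script.)
JOBID="$1"
for pid in $(ps -eo pid=); do
    # /proc/<pid>/environ is NUL-separated; unpack it and look for the id
    if tr '\0' '\n' < /proc/"$pid"/environ 2>/dev/null \
        | grep -qx "PBS_JOBID=$JOBID"; then
        kill -9 "$pid"
    fi
done
```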

I've now added a third: an hourly cron job that checks each compute node for user processes; if a user has processes running but no PBS job scheduled on that node, those processes are killed. Its actions are logged under /ibrixfs1/cleanuplog/ for future review. The script itself is at /etc/cron.hourly/cleanall.sh.
This takes care of users who log in directly and abuse resources on the compute nodes (though their processes can hang around for at most an hour).
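The cron script works out to something like the sketch below. This is a hypothetical reconstruction, not the contents of cleanall.sh; the qstat parsing, the UID cutoff for system accounts, and the log naming are all assumptions (a real script would also match the job's node list).

```shell
#!/bin/sh
# Hypothetical sketch of the hourly cleanup cron: kill processes owned
# by ordinary users who have no active PBS job, logging what was done.
# Paths, qstat parsing, and the UID cutoff are assumptions.
LOG=/ibrixfs1/cleanuplog/$(hostname).$(date +%Y%m%d%H)

# Usernames owning currently running PBS jobs (field 2 of `qstat -rn`,
# after its header lines -- site output formats vary).
ACTIVE_USERS=$(qstat -rn 2>/dev/null | awk 'NR>5 {print $2}' | sort -u)

ps -eo pid=,uid=,user=,comm= | while read pid uid user comm; do
    [ "$uid" -lt 500 ] && continue          # skip system accounts
    if ! echo "$ACTIVE_USERS" | grep -qx "$user"; then
        echo "$(date): killing $pid ($user, $comm)" >> "$LOG"
        kill -9 "$pid"
    fi
done
```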

There is no single universal cleanup process; leftover MatlabMPI processes will be picked up by this latest addition.
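Incidentally, the getcwd() failure Jaime reports below is easy to reproduce by hand: once a process's working directory is removed out from under it (by a cleanup pass, or a directory briefly disappearing on the filesystem), getcwd() fails with "No such file or directory". A minimal demonstration, not taken from the thread:

```shell
# Minimal demonstration (not from the thread): getcwd() fails once the
# working directory is removed out from under the running process.
dir=$(mktemp -d)
cd "$dir"
rmdir "$dir"     # the directory vanishes while we are still "in" it
/bin/pwd         # fails: cannot get current directory (ENOENT)
```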

Thanks,
Robin


On Jun 12, 2006, at 11:11 AM, jaime combariza wrote:

> I am seeing several problems when I try to run MatlabMPI.
>
> Here is one that I think may have to do with the file system; it
> does not always happen at the same process:
>
> fatal error in svComputeFullPathToExe: getcwd(cwd) error: No such file or directory
>
>
> Somehow the CWD is not found, so it cannot find the files it needs
> to execute.
>
>
>
> Jaime E. Combariza, Ph.D.
> Assistant Director Research Computing
> http://www.muohio.edu/researchcomputing
> Miami University
> (513) 529-5080
