
I have one master node and two slave nodes.
One slave node connects successfully, but the other fails to connect.

Each node runs Ubuntu 18.04 with Slurm 17.11.

When I run `systemctl status slurmd.service`, I get this error:

slurmd.service - Slurm node daemon
   Loaded: loaded (/lib/systemd/system/slurmd.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Tue 2019-10-15 15:28:22 KST; 22min ago
     Docs: man:slurmd(8)
  Process: 27335 ExecStart=/usr/sbin/slurmd $SLURMD_OPTIONS (code=exited, status=1/FAILURE)
 Main PID: 75036 (code=exited, status=0/SUCCESS)
    Tasks: 1 (limit: 19660)
   CGroup: /system.slice/slurmd.service
           └─97690 /usr/sbin/slurmd -d /usr/sbin/slurmstepd

Oct 15 15:28:22 seok-System systemd[1]: Starting Slurm node daemon...
Oct 15 15:28:22 seok-System systemd[1]: slurmd.service: Control process exited, code=exited status=1
Oct 15 15:28:22 seok-System systemd[1]: slurmd.service: Failed with result 'exit-code'.
Oct 15 15:28:22 seok-System systemd[1]: Failed to start Slurm node daemon.

When I run `slurmd -Dvvv`, I get the following output:

(null): log_init(): Unable to open logfile `/var/log/slurmd.log': Permission denied
slurmd: debug: Log file re-opened
slurmd: Message aggregation disabled
slurmd: debug: init: Gres GPU plugin loaded
slurmd: Gres Name=gpu Type=gtx1080ti Count=1
slurmd: Gres Name=gpu Type=gtx1080ti Count=1
slurmd: gpu device number 0(/dev/nvidia0):c 195:0 rwm
slurmd: gpu device number 1(/dev/nvidia1):c 195:1 rwm
slurmd: topology NONE plugin loaded
slurmd: route default plugin loaded
slurmd: debug2: Gathering cpu frequency information for 32 cpus
slurmd: debug: Resource spec: No specialized cores configured by default on this node
slurmd: debug: Resource spec: Reserved system memory limit not configured for this node
slurmd: debug: Reading cgroup.conf file /etc/slurm/cgroup.conf
slurmd: debug: Ignoring obsolete CgroupReleaseAgentDir option.
slurmd: debug: Reading cgroup.conf file /etc/slurm/cgroup.conf
slurmd: debug: Ignoring obsolete CgroupReleaseAgentDir option.
slurmd: debug2: _file_write_content: unable to open '/sys/fs/cgroup/memory/memory.use_hierarchy' for writing : Permission denied
slurmd: debug2: xcgroup_set_param: unable to set parameter 'memory.use_hierarchy' to '1' for '/sys/fs/cgroup/memory'
slurmd: debug: task/cgroup/memory: total:128846M allowed:100%(enforced), swap:0%(permissive), max:100%(128846M) max+swap:100%(257692M) min:30M kmem:100%(128846M enforced) min:30M swappiness:0(unset)
slurmd: debug: task/cgroup: now constraining jobs allocated memory
slurmd: debug: task/cgroup: loaded
slurmd: debug: Munge authentication plugin loaded
slurmd: debug: spank: opening plugin stack /etc/slurm/plugstack.conf
slurmd: Munge cryptographic signature plugin loaded
slurmd: error: chmod(/var/spool/slurmd, 0755): Operation not permitted
slurmd: error: Unable to initialize slurmd spooldir
slurmd: error: slurmd initialization failed

Both nodes show the same messages in this output, but slurmd starts successfully on one node and fails on the other.

I have checked munge, the permissions, and so on, but I don't know how to fix it.
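For the munge part, these are roughly the checks I ran (using `node2` as a stand-in for whichever node fails, and the default key path `/etc/munge/munge.key`; adjust as needed):

    # Is munged running, and is the key the same on master and node?
    sudo systemctl status munge
    md5sum /etc/munge/munge.key      # compare against the master's key

    # Can a credential be created locally and decoded on the remote side?
    munge -n | unmunge
    munge -n | ssh node2 unmunge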

And here is my `slurm.conf`:

# Put this file on all nodes of your cluster.
# See the slurm.conf man page for more information.
#
ControlMachine=master
ControlAddr=ip.ip.ip.ip
#BackupController=
#BackupAddr=
#
AuthType=auth/munge
AuthInfo=/var/run/munge/munge.socket.2
#CheckpointType=checkpoint/none
CryptoType=crypto/munge
#DisableRootJobs=NO
#EnforcePartLimits=NO
#Epilog=
#EpilogSlurmctld=
#FirstJobId=1
#MaxJobId=999999
#GresTypes=
#GroupUpdateForce=0
#GroupUpdateTime=600
#JobCheckpointDir=/var/slurm/checkpoint
#JobCredentialPrivateKey=
#JobCredentialPublicCertificate=
#JobFileAppend=0
#JobRequeue=1
#JobSubmitPlugins=1
#KillOnBadExit=0
#LaunchType=launch/slurm
#Licenses=foo*4,bar
#MailProg=/bin/mail
#MaxJobCount=5000
#MaxStepCount=40000
#MaxTasksPerNode=128
MpiDefault=none
#MpiParams=ports=#-#
PluginDir=/usr/lib/slurm
#PlugStackConfig=
#PrivateData=jobs
ProctrackType=proctrack/cgroup
#Prolog=
#PrologFlags=
#PrologSlurmctld=
#PropagatePrioProcess=0
#PropagateResourceLimits=
#PropagateResourceLimitsExcept=
#RebootProgram=
ReturnToService=1
#SallocDefaultCommand=
SlurmctldPidFile=/var/run/slurm-llnl/slurmctld.pid
SlurmctldPort=6817
SlurmdPidFile=/var/run/slurm-llnl/slurmd.pid
SlurmdPort=6818
SlurmdSpoolDir=/var/spool/slurmd
SlurmUser=slurm
#SlurmdUser=root
#SrunEpilog=
#SrunProlog=
StateSaveLocation=/var/spool/slurm-llnl
SwitchType=switch/none
#TaskEpilog=
TaskPlugin=task/cgroup
TaskPluginParam=Sched
#TaskProlog=
#TopologyPlugin=topology/tree
#TmpFS=/tmp
#TrackWCKey=no
#TreeWidth=
#UnkillableStepProgram=
#UsePAM=0
#
#
# TIMERS
#BatchStartTimeout=10
#CompleteWait=0
#EpilogMsgTime=2000
#GetEnvTimeout=2
#HealthCheckInterval=0
#HealthCheckProgram=
InactiveLimit=0
KillWait=30
#MessageTimeout=10
#ResvOverRun=0
MinJobAge=300
#OverTimeLimit=0
SlurmctldTimeout=120
SlurmdTimeout=300
#UnkillableStepTimeout=60
#VSizeFactor=0
Waittime=0
#
#
# SCHEDULING
#DefMemPerCPU=0
FastSchedule=1
#MaxMemPerCPU=0
#SchedulerTimeSlice=30
SchedulerType=sched/backfill
SelectType=select/cons_res
SelectTypeParameters=CR_Core
#
#
# JOB PRIORITY
#PriorityFlags=
#PriorityType=priority/basic
#PriorityDecayHalfLife=
#PriorityCalcPeriod=
#PriorityFavorSmall=
#PriorityMaxAge=
#PriorityUsageResetPeriod=
#PriorityWeightAge=
#PriorityWeightFairshare=
#PriorityWeightJobSize=
#PriorityWeightPartition=
#PriorityWeightQOS=
#
#
# LOGGING AND ACCOUNTING
#AccountingStorageEnforce=0
#AccountingStorageHost=
#AccountingStorageLoc=
#AccountingStoragePass=
#AccountingStoragePort=
AccountingStorageType=accounting_storage/none
#AccountingStorageUser=
AccountingStoreJobComment=YES
ClusterName=cluster
DebugFlags=NO_CONF_HASH
#JobCompHost=
#JobCompLoc=
#JobCompPass=
#JobCompPort=
JobCompType=jobcomp/none
#JobCompUser=
#JobContainerType=job_container/none
JobAcctGatherFrequency=30
JobAcctGatherType=jobacct_gather/none
SlurmctldDebug=3
SlurmctldLogFile=/var/log/slurmctld.log
SlurmdDebug=3
SlurmdLogFile=/var/log/slurmd.log
#SlurmSchedLogFile=
#SlurmSchedLogLevel=
#
#
# POWER SAVE SUPPORT FOR IDLE NODES (optional)
#SuspendProgram=
#ResumeProgram=
#SuspendTimeout=
#ResumeTimeout=
#ResumeRate=
#SuspendExcNodes=
#SuspendExcParts=
#SuspendRate=
#SuspendTime=
#
#
# COMPUTE NODES
GresTypes=gpu
NodeName=node1 Gres=gpu:pascal:1  NodeAddr=ip.ip.ip.ip CPUs=32 State=UNKNOWN CoresPerSocket=8 ThreadsPerCore=2 RealMemory=48209
NodeName=node2 Gres=gpu:pascal:2  NodeAddr=ip.ip.ip.ip CPUs=32 State=UNKNOWN CoresPerSocket=16 ThreadsPerCore=2 RealMemory=128846
PartitionName=Test Nodes=node1 Default=YES MaxTime=INFINITE State=UP
PartitionName=Test Nodes=node2 Default=YES MaxTime=INFINITE State=UP


Edit

`/var/spool` permissions: drwxr-xr-x 8 root root 4096 Oct 15 14:58 spool

`/var/spool/slurmd` permissions: drwxr-xr-x 2 slurm slurm 4096 Oct 15 14:58 slurmd

I ran `sudo chmod 777 /var/spool /var/spool/slurmd` to change the permissions, but it didn't help; I still get the same error.
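In case it helps, this is what I ran when checking and changing the permissions (the `ls -l` on the log file and the `namei` call are extra checks beyond what is described above; `namei` just walks every component of the path):

    ls -ld /var/spool /var/spool/slurmd
    ls -l /var/log/slurmd.log
    namei -l /var/spool/slurmd

    # last attempt, which did not help:
    sudo chmod 777 /var/spool /var/spool/slurmd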


Edit

Here is my slurmd.log file:

 gpu device number 0(/dev/nvidia0):c 195:0 rwm
 gpu device number 1(/dev/nvidia1):c 195:1 rwm
 fatal: Unable to find slurmstepd file at /tmp/slurm-build/sbin/slurmstepd

I never touched slurmstepd, so where does that path get set?
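The only thing I could think of was to look at where the installed binaries live and whether the slurmd binary still references that build-time path. As far as I understand, the slurmstepd location is compiled into slurmd at build time (and can be overridden with `slurmd -d <path>`), so I'm not sure this is the right way to check:

    # Where are the installed binaries?
    which slurmd slurmstepd
    ls -l /usr/sbin/slurmd /usr/sbin/slurmstepd

    # Does the slurmd binary still reference the build prefix?
    strings /usr/sbin/slurmd | grep -i slurmstepd

    # Confirm which build/version is installed
    slurmd -V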

    I'd start by looking at the permissions of `/var/spool/` (and `/var/spool/slurmd` if that exists). – Gerald Schneider Oct 15 '19 at 08:04
  • @GeraldSchneider `/var/spool` is owned by `root`; `/var/spool/slurmd` exists and is owned by `slurm` – NAMENAME KANG Oct 15 '19 at 08:14
  • That's not the permissions, that's the owner. That's also important, but it isn't all. Please add the output of `ls -l` for each file and directory that is mentioned in your log output to your question. And please edit your question, don't add further information in comments. The output of commands is unreadable here. – Gerald Schneider Oct 15 '19 at 08:28
