Creating a global module for shpc is not working with shpc config inituser #693

@shahzebsiddiqui

Description

Hi @vsoch ,

I am trying to set up an shpc modulefile that everyone on the HPC cluster can use; a sample of the modulefile I wrote is below. I have an installation of shpc that I was able to get set up. Since I wanted every user to have their own shpc settings, I worked out the logic to create them with shpc config inituser on module load. The views directory is also a bug: for whatever reason, shpc view list does not work by default, so I had to handle this in the modulefile somehow so users do not have to do it themselves. It would be nice if this bug were fixed.

(.pyenv) [shahzebsiddiqui93358008@login ~]$ cat /camber/home/tools/modules/shpc/0.1.32.lua
-- -*- lua -*-
-- Modulefile for singularity-hpc (shpc)

-- Define module name and version
local name = myModuleName()
local version = myModuleVersion()

whatis("Name: " .. name)
whatis("Version: " .. version)
whatis("Category: Container, HPC")
whatis("Description: Singularity-HPC (shpc) for managing containerized applications")
whatis("URL: https://github.com/singularityhub/singularity-hpc")

-- Installation path using myModuleName and myModuleVersion
local root = pathJoin("/camber/home/tools/apps", name, version, "singularity-hpc")

-- Activate the Python virtual environment
execute{cmd="source " .. pathJoin(root, ".pyenv/bin/activate"), modeA={"load"}}


-- Check for ~/.singularity-hpc/settings.yml and initialize if it doesn't exist
local user_shpc_config = pathJoin(os.getenv("HOME"),".singularity-hpc","settings.yml")
if not isFile(user_shpc_config) then
	execute{cmd="shpc config inituser", modeA={"load"}}
end


-- Ensure the per-user views directory exists (shpc view list fails without it)
local view_dir = pathJoin(os.getenv("HOME"),"shpc","views")
if not isDir(view_dir) then
	execute{cmd="mkdir -p " .. view_dir, modeA={"load"}}
end


-- Set "apps" as the default view and create it if it does not exist yet
execute{cmd="shpc config set default_view apps 2>/dev/null", modeA={"load"}}
if not isDir(pathJoin(view_dir,"apps")) then
	execute{cmd="shpc view create apps", modeA={"load"}}
end
-- Deactivate the Python virtual environment on unload
execute{cmd="deactivate", modeA={"unload"}}

-- Set environment variable for shpc home (optional, if needed)
setenv("SHPC_HOME", root)
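As an aside, a possibly more robust variant, which I have not tested, would be to ask shpc itself where views live instead of hard-coding $HOME/shpc/views. This is only a sketch: it assumes Lmod's capture() helper is available and that this shpc version supports shpc config get (the exact output format may vary):

-- Untested sketch: query shpc for views_base instead of hard-coding it.
-- mode() guards the subprocess so it only runs on load, not while Lmod
-- is evaluating the module for spider/avail.
if mode() == "load" then
    local out = capture("shpc config get views_base 2>/dev/null") or ""
    -- take the last whitespace-delimited token; adjust if the output differs
    local views_base = out:match("(%S+)%s*$")
    if views_base and not isDir(views_base) then
        execute{cmd="mkdir -p " .. views_base, modeA={"load"}}
    end
end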

The problem is that when I run shpc config inituser, paths still resolve against $root_dir (the shared install) instead of $HOME/shpc or similar; I do not know why it is doing that, and I think I had it working once. Here is what happens when I run it manually:

(.pyenv) [shahzebsiddiqui80981545@login shpc]$ shpc config inituser
Created user settings file /camber/home/shahzebsiddiqui80981545/.singularity-hpc/settings.yml
(.pyenv) [shahzebsiddiqui80981545@login shpc]$ cat ~/.singularity-hpc/settings.yml
# The Singularity HPC settings file holds basic information about the
# module system to use, and where modules are stored. All settings are
# currently specific to singularity, but can be extended to other technologies.

# set a default module system
# Currently lmod and tcl are supported. To request an additional system,
# please open an issue https://github.com/singularityhub/singularity-hpc
module_sys: lmod

# config editor
config_editor: vim

# set a default container technology (singularity or podman)
container_tech: singularity

# Registry Recipes (order of GitHub providers or paths here is honored for the search path)
# To reference a path in the $PWD of the shpc install use $root_dir (e.g., $root_dir/registry)
# Please preserve the flat list format for the yaml loader
registry: [https://github.com/singularityhub/shpc-registry]

# Registry to sync from (only to a filesystem registry supported)
sync_registry: https://github.com/singularityhub/shpc-registry

# Lmod or Environment Modules settings
# The install directory for modules. Defaults to the install directory/modules
module_base: $root_dir/modules

# Format string for module commands exec,run,shell (not aliases) (can also include:
# {{parsed_name.registry}} , {{ parsed_name.repository }},
# {{parsed_name.tool}}, {{parsed_name.version}}
# This is where you might add a prefix to your module names, if desired.
module_name: '{{ parsed_name.tool }}'

# When multiple versions are available and none is requested, allow picking one automatically
# module_sys: allow the module software to decide
# null: do nothing, versions will be specified manually
# last_installed: use the last installed as the default
# first_installed: use the first installed
default_version: module_sys

# store containers separately from module files
# It's recommended to do this for faster loading
container_base: $root_dir/containers

# When parsing labels, replace newlines with this string
label_separator: ', '

wrapper_base: $root_dir/modules

# Default root directory to create views
views_base: $root_dir/views

# Always install to a default "active" view (null means we don't)
default_view:

# if defined, add to lmod script to load this Singularity module first
singularity_module:
podman_module:

# Path to (or name of) the singularity executable to use in wrapper scripts. Can be used instead of / with singularity_module
singularity_path: singularity

# string with comma-separated list of paths to bind. If set, exported to SINGULARITY_BINDPATH
bindpaths:

# for container technologies that can specify a tty input (e.g., -t)
enable_tty: true

# exported to SINGULARITY_SHELL, defaults to /bin/bash.
singularity_shell: /bin/sh
podman_shell: /bin/sh
docker_shell: /bin/sh

# shell for test.sh file
test_shell: /bin/bash

# shell for wrapper shell scripts
wrapper_shell: /bin/bash

# Wrapper scripts to use for aliases (replaces them)
# Put a fullpath here for a custom script, otherwise we look in shpc/main/wrappers
# If you want a custom wrapper script for a one-off container, define it there with the script
# stored alongside the container.yaml.
wrapper_scripts:
  enabled: true

  # use for docker aliases (set to null to disable)
  docker: docker.sh

  # use for podman aliases (set to null to disable)
  podman: docker.sh

  # use for singularity aliases (set to null to disable)
  singularity: singularity.sh

  # Add an extra custom template directory (searched first)
  templates:

# the module namespace you want to install from. E.g., if you have ghcr.io/autamus/clingo
# and you set the namespace to ghcr.io/autamus, you can just do: shpc install clingo.
namespace:

# The name of the environment file to bind to the container.
# This determines order of load
environment_file: 99-shpc.sh

# container features like gpu will modify generated recipes
container_features:
  gpu:      # one of null, amd, or nvidia
  x11:      # one of null, true, false, or a path
            # defaults to ~/.Xauthority if set to true and the container has x11: true
  home:     # one of null, or a single path or src:dest path.
            # home: true in a container.yaml will use this path, if defined

If I just run the module load, it fails when creating the apps directory: it resolves to an incorrect directory path because it is not using my own local copy of the settings.

(.pyenv) [shahzebsiddiqui80981545@login shpc]$ ml -shpc
[shahzebsiddiqui80981545@login shpc]$ ml shpc
Error creating path /camber/home/tools/apps/shpc/0.1.32/singularity-hpc/views/apps, exiting.
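
For now, a manual workaround I am considering (untested, and assuming shpc expands environment variables in settings paths; if not, absolute paths would be needed) is to pin the user-scoped directories in ~/.singularity-hpc/settings.yml so nothing resolves against the shared install:

# Untested workaround sketch for ~/.singularity-hpc/settings.yml:
# replace the $root_dir defaults with user-scoped locations so paths
# never resolve under /camber/home/tools/apps.
module_base: $HOME/.singularity-hpc/modules
container_base: $HOME/.singularity-hpc/containers
wrapper_base: $HOME/.singularity-hpc/modules
views_base: $HOME/shpc/views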

I am currently running the following version

(.pyenv) [shahzebsiddiqui80981545@login shpc]$ shpc --version
0.1.32
