matflow.ElementResources#

class matflow.ElementResources(scratch=None, parallel_mode=None, num_cores=None, num_cores_per_node=None, num_threads=None, num_nodes=None, scheduler=None, shell=None, use_job_array=None, max_array_items=None, write_app_logs=False, combine_jobscript_std=<factory>, combine_scripts=None, time_limit=None, scheduler_args=<factory>, shell_args=<factory>, os_name=None, platform=None, CPU_arch=None, executable_extension=None, environments=None, resources_id=None, skip_downstream_on_failure=True, allow_failed_dependencies=False, SGE_parallel_env=None, SLURM_partition=None, SLURM_num_tasks=None, SLURM_num_tasks_per_node=None, SLURM_num_nodes=None, SLURM_num_cpus_per_task=None)#

Bases: ElementResources

The resources an element requires.

Note

This class is not typically instantiated by the user. It is instantiated when the ElementActionRun.resources and Jobscript.resources attributes are accessed, and when the ElementIteration.get_resources_obj method is called. It is common for most of these attributes to be unspecified. Many of them have complex interactions with each other.
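In practice, resources are specified declaratively in the workflow file rather than by constructing this class directly. A hypothetical YAML snippet for illustration only (the schema name and values are invented; consult the workflow documentation for the exact layout supported by your app configuration):

```yaml
tasks:
- schema: my_simulation_task   # hypothetical task schema
  resources:
    any:                       # apply to all actions of the task
      num_cores: 16
      scheduler: slurm
      time_limit: "01:30:00"
      SLURM_partition: compute
```

Keys under the resources mapping correspond to the constructor parameters listed below.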

Parameters:
  • scratch (str) – Which scratch space to use.

  • parallel_mode (ParallelMode) – Which parallel mode to use.

  • num_cores (int) – How many cores to request.

  • num_cores_per_node (int) – How many cores per compute node to request.

  • num_threads (int) – How many threads to request.

  • num_nodes (int) – How many compute nodes to request.

  • scheduler (str) – Which scheduler to use.

  • shell (str) – Which system shell to use.

  • use_job_array (bool) – Whether to use array jobs.

  • max_array_items (int) – If using array jobs, up to how many items should be in the job array.

  • write_app_logs (bool) – Whether an app log file should be written.

  • combine_jobscript_std (bool) – Whether jobscript standard output and error streams should be combined.

  • combine_scripts (bool) – Whether Python scripts should be combined.

  • time_limit (str) – How long to run for.

  • scheduler_args (dict[str, Any]) – Additional arguments to pass to the scheduler.

  • shell_args (dict[str, Any]) – Additional arguments to pass to the shell.

  • os_name (str) – Which OS to use.

  • platform (str) – System platform name, like “win”, “linux”, or “macos”.

  • CPU_arch (str) – CPU architecture, like “x86_64”, “AMD64”, or “arm64”.

  • executable_extension (str) – “.exe” on Windows, empty otherwise.

  • environments (dict) – Environment specifiers keyed by names.

  • resources_id (int) – An arbitrary integer that can be used to force multiple jobscripts.

  • skip_downstream_on_failure (bool) – Whether to skip downstream dependents on failure.

  • allow_failed_dependencies (int | float | bool | None) – The failure tolerance with respect to dependencies, specified as a number or proportion.

  • SGE_parallel_env (str) – Which SGE parallel environment to request.

  • SLURM_partition (str) – Which SLURM partition to request.

  • SLURM_num_tasks (int) – How many SLURM tasks to request.

  • SLURM_num_tasks_per_node (int) – How many SLURM tasks per compute node to request.

  • SLURM_num_nodes (int) – How many compute nodes to request.

  • SLURM_num_cpus_per_task (int) – How many CPU cores to ask for per SLURM task.

Methods

from_json_like

Make an instance of this class from JSON (or YAML) data.

get_default_CPU_arch

Get the default value for the CPU architecture.

get_default_executable_extension

Get the default value for the executable extension.

get_default_os_name

Get the default value for OS name.

get_default_platform

Get the default value for platform.

get_default_scheduler

Get the default value for scheduler.

get_default_shell

Get the default value for the shell name.

get_env_instance_filterable_attributes

Get a tuple of resource attributes that are used to filter environment executable instances at submit- and run-time.

get_jobscript_hash

Get hash from all arguments that distinguish jobscripts.

set_defaults

Set defaults for unspecified values that need defaults.

to_dict

Serialize this object as a dictionary.

to_json_like

Serialize this object as an object structure that can be trivially converted to JSON.

validate_against_machine

Validate the values for os_name, shell and scheduler against those supported on this machine (as specified by the app configuration).

Attributes

CPU_arch

CPU architecture, like "x86_64", "AMD64", or "arm64".

SGE_parallel_env

Which SGE parallel environment to request.

SLURM_is_parallel

Returns True if any SLURM-specific arguments indicate a parallel job.

SLURM_num_cpus_per_task

How many CPU cores to ask for per SLURM task.

SLURM_num_nodes

How many compute nodes to request.

SLURM_num_tasks

How many SLURM tasks to request.

SLURM_num_tasks_per_node

How many SLURM tasks per compute node to request.

SLURM_partition

Which SLURM partition to request.

allow_failed_dependencies

The failure tolerance with respect to dependencies, specified as a number or proportion.

combine_scripts

Whether Python scripts should be combined.

environments

Environment specifiers keyed by names.

executable_extension

Typical extension used to indicate an executable file; ".exe" on Windows, empty on all other platforms.

is_parallel

Returns True if any scheduler-agnostic arguments indicate a parallel job.

max_array_items

If using array jobs, up to how many items should be in the job array.

num_cores

How many cores to request.

num_cores_per_node

How many cores per compute node to request.

num_nodes

How many compute nodes to request.

num_threads

How many threads to request.

os_name

Which OS to use.

parallel_mode

Which parallel mode to use.

platform

System platform name, like "win", "linux", or "macos".

resources_id

An arbitrary integer that can be used to force multiple jobscripts.

scheduler

Which scheduler to use.

scratch

Which scratch space to use.

shell

Which system shell to use.

skip_downstream_on_failure

Whether to skip downstream dependents on failure.

time_limit

How long to run for.

use_job_array

Whether to use array jobs.

write_app_logs

Whether an app log file should be written.

combine_jobscript_std

Whether jobscript standard output and error streams should be combined.

scheduler_args

Additional arguments to pass to the scheduler.

shell_args

Additional arguments to pass to the shell.

CPU_arch: str | None = None#

CPU architecture, like “x86_64”, “AMD64”, or “arm64”.

SGE_parallel_env: str | None = None#

Which SGE parallel environment to request.

property SLURM_is_parallel: bool#

Returns True if any SLURM-specific arguments indicate a parallel job.

SLURM_num_cpus_per_task: int | None = None#

How many CPU cores to ask for per SLURM task.

SLURM_num_nodes: int | None = None#

How many compute nodes to request.

SLURM_num_tasks: int | None = None#

How many SLURM tasks to request.

SLURM_num_tasks_per_node: int | None = None#

How many SLURM tasks per compute node to request.

SLURM_partition: str | None = None#

Which SLURM partition to request.

allow_failed_dependencies: int | float | bool | None = False#

The failure tolerance with respect to dependencies, specified as a number or proportion.

combine_jobscript_std: bool#

Whether jobscript standard output and error streams should be combined.

combine_scripts: bool | None = None#

Whether Python scripts should be combined.

environments: dict[str, dict[str, Any]] | None = None#

Environment specifiers keyed by names.

executable_extension: str | None = None#

Typical extension used to indicate an executable file; “.exe” on Windows, empty on all other platforms.

classmethod from_json_like(json_like, shared_data=None)#

Make an instance of this class from JSON (or YAML) data.

Parameters:
  • json_like (str | Mapping[str, JSONed] | Sequence[Mapping[str, JSONed]] | None) – The data to deserialise.

  • shared_data (Mapping[str, ObjectList[JSONable]] | None) – Shared context data.

Return type:

The deserialised object.

classmethod get_default_CPU_arch()#

Get the default value for the CPU architecture.

Return type:

str

classmethod get_default_executable_extension()#

Get the default value for the executable extension.

Return type:

str

static get_default_os_name()#

Get the default value for OS name.

Return type:

str

classmethod get_default_platform()#

Get the default value for platform.

Return type:

str

classmethod get_default_scheduler(os_name, shell_name)#

Get the default value for scheduler.

Parameters:
  • os_name (str) –

  • shell_name (str) –

Return type:

str

classmethod get_default_shell()#

Get the default value for the shell name.

Return type:

str

static get_env_instance_filterable_attributes()#

Get a tuple of resource attributes that are used to filter environment executable instances at submit- and run-time.

Return type:

tuple[str, ...]

get_jobscript_hash()#

Get hash from all arguments that distinguish jobscripts.

Return type:

int
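The general idea of hashing only the jobscript-distinguishing arguments can be sketched as follows. The field list and hashing scheme here are illustrative assumptions, not the actual method:

```python
import json


def jobscript_hash_sketch(resources: dict) -> int:
    """Hypothetical sketch: hash only the fields that would force
    elements into separate jobscripts (field list is an assumption)."""
    distinguishing = {
        key: resources.get(key)
        for key in ("scheduler", "shell", "num_cores", "num_nodes", "time_limit")
    }
    # A stable serialisation makes equal resource specs hash equally.
    return hash(json.dumps(distinguishing, sort_keys=True))
```

Two resource specifications that agree on the distinguishing fields produce the same hash, so their elements can share a jobscript.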

property is_parallel: bool#

Returns True if any scheduler-agnostic arguments indicate a parallel job.

max_array_items: int | None = None#

If using array jobs, up to how many items should be in the job array.

num_cores: int | None = None#

How many cores to request.

num_cores_per_node: int | None = None#

How many cores per compute node to request.

num_nodes: int | None = None#

How many compute nodes to request.

num_threads: int | None = None#

How many threads to request.

os_name: str | None = None#

Which OS to use.

parallel_mode: ParallelMode | None = None#

Which parallel mode to use.

platform: str | None = None#

System platform name, like “win”, “linux”, or “macos”.

resources_id: int | None = None#

An arbitrary integer that can be used to force multiple jobscripts.

scheduler: str | None = None#

Which scheduler to use.

scheduler_args: dict[str, Any]#

Additional arguments to pass to the scheduler.

scratch: str | None = None#

Which scratch space to use.

set_defaults()#

Set defaults for unspecified values that need defaults.

shell: str | None = None#

Which system shell to use.

shell_args: dict[str, Any]#

Additional arguments to pass to the shell.

skip_downstream_on_failure: bool = True#

Whether to skip downstream dependents on failure.

time_limit: str | None = None#

How long to run for.

to_dict()#

Serialize this object as a dictionary.

Return type:

dict[str, Any]

to_json_like(dct=None, shared_data=None, exclude=(), path=None)#

Serialize this object as an object structure that can be trivially converted to JSON. Note that YAML can also be produced from the result of this method; it just requires a different final serialization step.

Parameters:
  • dct (dict[str, JSONable] | None) –

  • shared_data (_JSONDeserState) –

  • exclude (Container[str | None]) –

  • path (list | None) –

Return type:

tuple[JSONDocument, _JSONDeserState]

use_job_array: bool | None = None#

Whether to use array jobs.

validate_against_machine()#

Validate the values for os_name, shell and scheduler against those supported on this machine (as specified by the app configuration).

write_app_logs: bool = False#

Whether an app log file should be written.