RUFAS.output_manager module#

class RUFAS.output_manager.LogVerbosity(*values)#

Bases: Enum

The different types of logs printed by the Output Manager. Set by the verbose command-line argument in main.py.

Attributes#

NONEstr

Selecting NONE will tell OutputManager not to print out anything during a simulation.

CREDITSstr

Selecting CREDITS will tell OutputManager to print out the credits.

ERRORSstr

Selecting ERRORS will tell OutputManager to print out all credits and errors added during a simulation.

WARNINGSstr

Selecting WARNINGS will tell OutputManager to print out the credits as well as warnings and errors added during a simulation.

LOGSstr

Selecting LOGS will tell OutputManager to print out the credits as well as logs, warnings, and errors added during a simulation.

Notes#

CREDITS is the default setting.

NONE = 'none'#
CREDITS = 'credits'#
ERRORS = 'errors'#
WARNINGS = 'warnings'#
LOGS = 'logs'#
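As a sketch of how these levels behave, the following toy mirror of the documented members shows value-based lookup (e.g. from a parsed verbose argument) and one possible cumulative ordering implied by the descriptions above; the `should_print` helper is illustrative and not part of the RUFAS API.

```python
from enum import Enum

class LogVerbosity(Enum):
    # Toy mirror of the documented members; not the RUFAS source.
    NONE = "none"
    CREDITS = "credits"
    ERRORS = "errors"
    WARNINGS = "warnings"
    LOGS = "logs"

# Hypothetical cumulative ordering implied by the descriptions: each
# level also prints everything the levels before it print.
_ORDER = [LogVerbosity.NONE, LogVerbosity.CREDITS, LogVerbosity.ERRORS,
          LogVerbosity.WARNINGS, LogVerbosity.LOGS]

def should_print(message_level, setting):
    """True if a message at message_level is shown under the given setting."""
    return _ORDER.index(message_level) <= _ORDER.index(setting)

# Value-based lookup, e.g. from a parsed --verbose argument:
assert LogVerbosity("warnings") is LogVerbosity.WARNINGS
```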
class RUFAS.output_manager.OriginLabel(*values)#

Bases: Enum

An enumeration representing the different labels for data origins when generating JSON output files.

Attributes#

TRUE_AND_REPORT_ORIGINSstr

Indicates that both the true origin and report origin should be included.

TRUE_ORIGINstr

Indicates that only the true origin should be included.

REPORT_ORIGINstr

Indicates that only the report origin should be included.

NONEstr

Indicates that no origin information should be included.

TRUE_AND_REPORT_ORIGINS = 'true and report origins'#
TRUE_ORIGIN = 'true origin'#
REPORT_ORIGIN = 'report origin'#
NONE = 'none'#
class RUFAS.output_manager.OutputManager#

Bases: object

Output manager for RuFaS simulation results. Works by collecting variables, logs, warnings, and errors into separate pools, and populates requested output channels from the pools once the simulation is done.

OutputManager is a singleton, i.e., only one instance of it can exist. After the first instance is created, subsequent calls to the constructor return that first instance, and the initializer method only runs once.
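The singleton behavior described above can be sketched with `__new__` and a one-shot initializer; the class and attribute names below are illustrative, not the RUFAS implementation.

```python
class SingletonSketch:
    _instance = None

    def __new__(cls):
        # Every later construction returns the first instance.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

    def __init__(self):
        # The initializer only does its work once, as documented
        # for OutputManager.
        if getattr(self, "_initialized", False):
            return
        self._initialized = True
        self.variables_pool = {}

a = SingletonSketch()
b = SingletonSketch()
assert a is b
```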

Class Attributes#

pool_element_typedict[str, list[Any]]

Type alias for the pool elements

JSON_OUTPUT_MAX_RECURSIVE_DEPTHint

Maximum depth for recursive serialization in JSON output files (default: 4)

Attributes#

variables_pooldict[str, dict[str, list[dict[str, Any]]]]

Contains variables reported to the output manager

warnings_pooldict[str, dict[str, list[dict[str, Any]]]]

Contains warnings reported to the output manager

errors_pooldict[str, dict[str, list[dict[str, Any]]]]

Contains errors reported to the output manager

logs_pooldict[str, dict[str, list[dict[str, Any]]]]

Contains logs reported to the output manager

timeRufasTime

A RufasTime object used to track the simulation time

_exclude_info_maps_flagbool

Set to True to exclude info_maps when adding variables to the variables_pool

_variables_usage_counterCounter[str]

A Counter object used to keep track of the number of times a variable in the variables_pool is used.

is_end_to_end_testing_runbool, default False

Indicates if end-to-end testing is being run.

is_first_post_processingbool, default True

True if post-processing (i.e. filtering and saving variables) has not occurred yet. This variable is used during end-to-end testing to manage which filters are used during different post-processing runs.

chunkificationbool

Set to True to enable chunkification of the output variable pool.

saved_pool_chunks_numint

The number of saved pool chunks.

saved_pool_chunks_pathPath | None

The path to the directory where saved pool chunks are stored.

available_memoryint

The available memory on the system.

average_add_variable_call_additionint, default 118

The average memory usage increase per call to add_variable.

add_variable_callint

The number of calls to add_variable().

save_chunk_threshold_call_countint

The threshold add_variable_call count for saving a pool chunk.

current_pool_sizeint

The current size of the variables pool.

maximum_pool_sizefloat

The maximum allowed variable pool size.

__instance = <RUFAS.output_manager.OutputManager object>#
pool_element_type#

alias of dict[str, list[Any]]

JSON_OUTPUT_MAX_RECURSIVE_DEPTH = 4#
__init__() None#
property _filter_prefixes: dict[str, str]#

Returns the appropriate set of acceptable filter prefixes.

setup_pool_overflow_control(output_dir: Path, max_memory_usage_percent: int, max_memory_usage: int | None = None, save_chunk_threshold_call_count: int | None = None) None#

Sets up the mechanism by which chunkification of the output variable pool is controlled.

Parameters#

output_dirPath

The path to the output directory where chunks will be saved.

max_memory_usage_percentint

The setting for the maximum output variable pool size as a percentage of the available memory.

max_memory_usageint | None, optional

The setting for the maximum output variable pool size in bytes.

save_chunk_threshold_call_countint | None, optional

The setting for the threshold add_variable_call count for saving a pool chunk.
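One plausible way these settings could combine into a pool-size cap (the actual RUFAS formula is not shown in these docs): take max_memory_usage_percent of the available memory and, when max_memory_usage is given, cap the result further.

```python
def maximum_pool_size(available_memory, max_memory_usage_percent,
                      max_memory_usage=None):
    """Hypothetical derivation of the pool-size cap from the documented
    settings; an assumption, not the RUFAS source."""
    cap = available_memory * max_memory_usage_percent / 100
    if max_memory_usage is not None:
        # An absolute byte limit, when provided, wins over the percentage.
        cap = min(cap, max_memory_usage)
    return cap

# 8 GiB available with a 25% cap yields a 2 GiB pool limit.
assert maximum_pool_size(8 * 1024**3, 25) == 2 * 1024**3
```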

_pool_element_factory() pool_element_type#

Factory for elements added to pools

_add_to_pool(pool: dict[str, pool_element_type], key: str, value: Any, info_map: dict[str, Any], first_info_map_only: bool = False) None#

Adds value and info map at key in the given pool.

Parameters#

pooldict[str, dict[str, list[dict[str, Any]]]]

The pool to add the value and info_map to.

keystr

The key to add the value and info_map at.

valueAny

The value to be added to the pool.

info_mapdict[str, Any]

The info map to be added to the pool.

first_info_map_onlybool, default False

If true, records only the first info map passed for that variable. If false, records all info maps passed for that variable.
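The pool element layout implied by pool_element_type and the parameters above can be sketched as parallel values and info_maps lists; this is an illustration of the documented behavior, not the RUFAS source.

```python
from typing import Any

def pool_element_factory() -> dict[str, list[Any]]:
    # Mirrors the documented pool_element_type alias.
    return {"values": [], "info_maps": []}

def add_to_pool(pool, key, value, info_map, first_info_map_only=False):
    """Sketch: append value, and append info_map unless first_info_map_only
    is set and an info map was already recorded for this key."""
    element = pool.setdefault(key, pool_element_factory())
    element["values"].append(value)
    if not (first_info_map_only and element["info_maps"]):
        element["info_maps"].append(info_map)

pool = {}
add_to_pool(pool, "k", 1, {"units": "kg"}, first_info_map_only=True)
add_to_pool(pool, "k", 2, {"units": "kg"}, first_info_map_only=True)
assert pool["k"]["values"] == [1, 2]
assert len(pool["k"]["info_maps"]) == 1
```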

add_variable(name: str, value: Any, info_map: dict[str, Any], first_info_map_only: bool = False) None#

Adds a variable to the pool.

Parameters#

namestr

The name of the variable

valueAny

The value of the variable

info_mapdict[str, Any]

Additional args, some are non-optional

info_map[“class”]str

The name of the class which called this function

info_map[“function”]str

The name of the function which called this function

info_map[“prefix”]str, optional

If present, overrides the automated prefix

info_map[“suppress_prefix”]bool, optional

If present and True, suppresses the automated prefix generation. Has no effect on manual prefix overrides.

info_map[“suffix”]str, optional

If present, gets appended to the key

first_info_map_onlybool, default False

If true, records only the first info map passed for that variable. If false, records all info maps passed for that variable.

add_variable_bulk(variables: list[tuple[dict[str, Any], dict[str, Any]]], first_info_map_only: bool = False) None#

Iterate through all variables and call add_variable() on each of them.

Parameters#

variableslist[tuple[dict[str, Any], dict[str, Any]]]

Variables to add in bulk, packaged in a list of tuples. Each tuple contains a dictionary (with the key being the variable name and the value being the output value) and its corresponding info map.

first_info_map_onlybool, default False

If true, records only the first info_map passed for each variable.

_save_current_variable_pool() None#

Save the current variable pool into JSON file. Flush the variable pool and reset the pool size.

_stringify_units(units: dict[str, Any] | MeasurementUnits) dict[str, Any] | str#

Recursively validates that units is either a valid MeasurementUnits enum member or a dictionary with valid MeasurementUnits enum members (including nested dictionaries). Converts the MeasurementUnits enum values to their string representations.

Parameters#

unitsdict[str, Any] | MeasurementUnits

Either a MeasurementUnits enum member, or a dictionary mapping string keys to either MeasurementUnits values or further dictionaries.

Returns#

dict[str, Any] | str

The validated and stringified units.

Raises#

TypeError

If any unit or nested unit does not have the type MeasurementUnits.
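A sketch of the documented recursion, using a toy two-member MeasurementUnits enum (the real enum lives elsewhere in RUFAS):

```python
from enum import Enum

class MeasurementUnits(Enum):
    # Toy stand-in for the RUFAS enum; members are illustrative.
    KG = "kg"
    CELSIUS = "°C"

def stringify_units(units):
    """Recursively validate units and convert enum members to strings,
    mirroring the documented behavior."""
    if isinstance(units, MeasurementUnits):
        return units.value
    if isinstance(units, dict):
        return {key: stringify_units(val) for key, val in units.items()}
    raise TypeError(f"Invalid unit type: {type(units).__name__}")

assert stringify_units(MeasurementUnits.KG) == "kg"
assert stringify_units({"avg": MeasurementUnits.CELSIUS}) == {"avg": "°C"}
```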

add_log(name: str, msg: str, info_map: dict[str, Any]) None#

Adds a log message to the pool of logs.

Parameters#

namestr

The name of the log

msgstr

The log message to be added to the pool

info_map: dict[str, Any]

Additional args to be logged, some are non-optional

info_map[“class”]str

The name of the class which called this function

info_map[“function”]str

The name of the function which called this function

info_map[“prefix”]str, optional

If present, overrides the automated prefix

info_map[“suppress_prefix”]bool, optional

If present and True, suppresses the automated prefix generation. Has no effect on manual prefix overrides.

info_map[“suffix”]str, optional

If present, gets appended to the key

add_warning(name: str, msg: str, info_map: dict[str, Any]) None#

Adds a warning message to the pool of warnings.

Parameters#

namestr

The name of the warning

msgstr

The warning message to be added to the pool

info_map: dict[str, Any]

Additional args to be logged, some are non-optional

info_map[“class”]str

The name of the class which called this function

info_map[“function”]str

The name of the function which called this function

info_map[“prefix”]str, optional

If present, overrides the automated prefix

info_map[“suppress_prefix”]bool, optional

If present and True, suppresses the automated prefix generation. Has no effect on manual prefix overrides.

info_map[“suffix”]str, optional

If present, gets appended to the key

add_error(name: str, msg: str, info_map: dict[str, Any]) None#

Adds an error message to the pool of errors.

Parameters#

namestr

The name of the error

msgstr

The error message to be added to the pool

info_map: dict[str, Any]

Additional args to be logged, some are non-optional

info_map[“class”]str

The name of the class which called this function

info_map[“function”]str

The name of the function which called this function

info_map[“prefix”]str, optional

If present, overrides the automated prefix

info_map[“suppress_prefix”]bool, optional

If present and True, suppresses the automated prefix generation. Has no effect on manual prefix overrides.

info_map[“suffix”]str, optional

If present, gets appended to the key

_handle_log_output(name: str, msg: str, info_map: dict[str, Any], log_level: LogVerbosity) None#

Formats log output based on log_level.

Parameters#

namestr

The name of the log.

msgstr

The log message to be added to the pool.

info_mapdict[str, Any]

Additional args to be logged.

log_levelLogVerbosity

The LogVerbosity level.

set_metadata_prefix(metadata_prefix: str) None#

Sets the metadata_prefix attribute.

set_log_verbose(log_verbose: LogVerbosity = LogVerbosity.CREDITS) None#

Sets the __log_verbose attribute.

_generate_key(name: str, info_map: dict[str, str | bool]) str#

Generates key for the pool. See “add_variable” docs for detailed arg description.

Raises#

KeyError

If either info_map[“class”] or info_map[“function”] are not present.

_get_prefix(caller_class: str, caller_function: str) str#

Returns the prefix for a key in the pool.

Parameters#

caller_classstr

Name of the class from which the call to the output manager originated

caller_functionstr

Name of the function from which the call to the output manager originated

Returns#

str

{caller_class}.{caller_function}
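The key rules documented for add_variable and _get_prefix (automated {caller_class}.{caller_function} prefix, manual override, suppression, suffix) can be sketched as follows; this is an illustration, not the RUFAS source:

```python
def generate_key(name, info_map):
    """Sketch of the documented key rules for the pool."""
    if "prefix" in info_map:
        prefix = info_map["prefix"]  # manual override wins
    elif info_map.get("suppress_prefix"):
        prefix = ""  # suppress the automated prefix
    else:
        # Automated prefix; raises KeyError if "class" or
        # "function" is missing, as documented for _generate_key.
        prefix = f'{info_map["class"]}.{info_map["function"]}'
    key = f"{prefix}.{name}" if prefix else name
    if "suffix" in info_map:
        key += info_map["suffix"]
    return key

assert generate_key(
    "num_animals",
    {"class": "AnimalManager", "function": "daily_updates"},
) == "AnimalManager.daily_updates.num_animals"
```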

_write_disclaimer(file_pointer: TextIO) None#

Writes the predefined disclaimer message to a given file.

Parameters#

file_pointer: TextIO

A file-like object (supporting the .write() method) that points to the file where the disclaimer should be written.

Example#

>>> output_manager = OutputManager()
>>> import io
>>> file_like_string = io.StringIO()
>>> output_manager._write_disclaimer(file_like_string)
>>> assert file_like_string.getvalue() == DISCLAIMER_MESSAGE + "\n"
dict_to_file_json(data_dict: dict[str, Any], path: Path, minify_output_file: bool = False, origin_label: OriginLabel = OriginLabel.NONE) None#

Saves a dictionary into a JSON file

Parameters#

data_dictdict[str, Any]

The dictionary to be saved

pathPath

The path to the file to be saved

minify_output_filebool

Boolean flag indicating whether to minify the output JSON file.

origin_labelOriginLabel, default OriginLabel.NONE

The origin label specifying the format of the detailed values string.

Raises#

Exception

If an error occurs while saving to the file.

Notes#

The dictionary is first converted to a serializable format using Utility.make_serializable().

The file is saved with no indentation.

If you want to save time and space, limit the maximum depth of the serialized dictionary using the max_depth parameter. You can also set the minify_output_file flag to True to minimize the output JSON file size.
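A sketch of the minification choice: both variants use no indentation, and the minified form additionally drops separator spaces (the exact separators used by RUFAS are an assumption here).

```python
import json

def dict_to_json_text(data_dict, minify_output_file=False):
    """Serialize without indentation; when minifying, also drop the
    spaces after item and key separators (an assumed detail)."""
    if minify_output_file:
        return json.dumps(data_dict, separators=(",", ":"))
    return json.dumps(data_dict)

text = dict_to_json_text({"a": 1, "b": [2, 3]}, minify_output_file=True)
assert text == '{"a":1,"b":[2,3]}'
```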

_add_detailed_values(data_dict: dict[str, Any], origin_label: OriginLabel) dict[str, Any]#

Adds a detailed_values list to each sub-dictionary to replace the original values list.

Parameters#

data_dictdict[str, Any]

The input dictionary containing keys that may map to other dictionaries with info_maps and values keys. info_maps should contain a list of dictionaries, each with a data_origin key indicating the source of the data. values should contain a list of values corresponding to these origins.

origin_labelOriginLabel

The origin label specifying the format of the detailed values string.

Returns#

dict[str, Any]

The modified dictionary with a detailed_values list added to each sub-dictionary that meets the criteria. This list provides detailed information on the origins and units of each value.

Notes#

When the OriginLabel is set to anything other than NONE, this method iterates over each key in the provided dictionary, and it will create a detailed_values list that integrates the data origins, values, and units. Depending on the origin_label parameter, the format of the detailed values will vary:

  • If origin_label is OriginLabel.TRUE_AND_REPORT_ORIGINS, the format is: “[true_origin_class.true_origin_function]->[report_origin]: value (units)” or “[true_origin_class.true_origin_function]->[report_origin]: subkey1 = value1 (units1), subkey2 = value2 (units2), …” if the value is a dictionary.

  • If origin_label is OriginLabel.TRUE_ORIGIN, the format is: “[true_origin_class.true_origin_function]: value (units)” or “[true_origin_class.true_origin_function]: subkey1 = value1 (units1), subkey2 = value2 (units2), …” if the value is a dictionary.

  • If origin_label is OriginLabel.REPORT_ORIGIN, the format is: “[report_origin]: value (units)” or “[report_origin]: subkey1 = value1 (units1), subkey2 = value2 (units2), …” if the value is a dictionary.

  • If origin_label is OriginLabel.NONE, there will be no detailed_values information added.

Examples#

>>> example_data_dict = {
...     "AnimalModuleReporter.report_daily_animal_population.num_animals": {
...         "info_maps": [
...             {"data_origin": [["AnimalManager", "daily_updates"]], "units": "animals"},
...             {"data_origin": [["AnimalManager", "daily_updates"]], "units": "animals"}
...         ],
...         "values": [193, 194]
...     },
...     "WeatherModuleReporter.report_daily_weather.temperature": {
...         "info_maps": [
...             {"data_origin": [["WeatherManager", "daily_temperature"]],
...              "units": {"avg": "°C", "min": "°C", "max": "°C"}},
...             {"data_origin": [["WeatherManager", "daily_temperature"]],
...              "units": {"avg": "°C", "min": "°C", "max": "°C"}}
...         ],
...         "values": [
...             {"avg": 25.5, "min": 18.2, "max": 32.1},
...             {"avg": 26.1, "min": 19.7, "max": 33.4}
...         ]
...     }
... }
>>> output_manager = OutputManager()
>>> modified_data_dict = output_manager._add_detailed_values(
...     example_data_dict, OriginLabel.TRUE_AND_REPORT_ORIGINS
... )
>>> assert modified_data_dict[
...     "AnimalModuleReporter.report_daily_animal_population.num_animals"]["detailed_values"
... ] == [
...    "[AnimalManager.daily_updates]->[AnimalModuleReporter.report_daily_animal_population.num_animals]: "
...    "193 (animals)",
...    "[AnimalManager.daily_updates]->[AnimalModuleReporter.report_daily_animal_population.num_animals]: "
...    "194 (animals)"
... ]
>>> assert modified_data_dict[
...     "WeatherModuleReporter.report_daily_weather.temperature"]["detailed_values"
... ] == [
...    "[WeatherManager.daily_temperature]->[WeatherModuleReporter.report_daily_weather.temperature]: "
...    "avg = 25.5 (°C), min = 18.2 (°C), max = 32.1 (°C)",
...    "[WeatherManager.daily_temperature]->[WeatherModuleReporter.report_daily_weather.temperature]: "
...    "avg = 26.1 (°C), min = 19.7 (°C), max = 33.4 (°C)"
... ]
_format_detailed_value_str(origin_label: OriginLabel, data: dict[str, Any]) str#

Formats the detailed values string based on the provided origin label and data.

Parameters#

origin_labelOriginLabel

The origin label specifying the format of the detailed values string.

datadict[str, Any]

A dictionary containing the necessary data for formatting the detailed values string. It should have the following keys:

  • “true_origin_class”: The class name of the true origin.

  • “true_origin_function”: The function name of the true origin.

  • “report_origin”: The report origin, which already includes the class and function names.

  • “value”: The value associated with the origin.

  • “units”: The units associated with the value.

Returns#

str

The formatted detailed values string based on the provided origin label and data.

Notes#

The format of the detailed values string depends on the origin_label parameter:

  • If origin_label is OriginLabel.TRUE_AND_REPORT_ORIGINS, the format is: “[true_origin_class.true_origin_function]->[report_origin]: value (units)” or “[true_origin_class.true_origin_function]->[report_origin]: subkey1 = value1 (units1), subkey2 = value2 (units2), …” if the value is a dictionary.

  • If origin_label is OriginLabel.TRUE_ORIGIN, the format is: “[true_origin_class.true_origin_function]: value (units)” or “[true_origin_class.true_origin_function]: subkey1 = value1 (units1), subkey2 = value2 (units2), …” if the value is a dictionary.

  • If origin_label is OriginLabel.REPORT_ORIGIN, the format is: “[report_origin]: value (units)” or “[report_origin]: subkey1 = value1 (units1), subkey2 = value2 (units2), …” if the value is a dictionary.

  • If origin_label is OriginLabel.NONE, there will be no detailed_values information, so no formatting will occur.
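The formats above can be sketched with a toy mirror of OriginLabel; the helper below is illustrative, not the RUFAS source.

```python
from enum import Enum

class OriginLabel(Enum):
    # Toy mirror of the documented enum.
    TRUE_AND_REPORT_ORIGINS = "true and report origins"
    TRUE_ORIGIN = "true origin"
    REPORT_ORIGIN = "report origin"
    NONE = "none"

def format_detailed_value(origin_label, data):
    """Sketch of the documented formats for a detailed values string."""
    if origin_label is OriginLabel.NONE:
        return ""  # no detailed_values information, no formatting
    value, units = data["value"], data["units"]
    if isinstance(value, dict):
        # Dictionary values expand to "subkey = value (units)" pairs.
        body = ", ".join(f"{k} = {v} ({units[k]})" for k, v in value.items())
    else:
        body = f"{value} ({units})"
    true_part = f'[{data["true_origin_class"]}.{data["true_origin_function"]}]'
    report_part = f'[{data["report_origin"]}]'
    if origin_label is OriginLabel.TRUE_AND_REPORT_ORIGINS:
        return f"{true_part}->{report_part}: {body}"
    if origin_label is OriginLabel.TRUE_ORIGIN:
        return f"{true_part}: {body}"
    return f"{report_part}: {body}"

assert format_detailed_value(
    OriginLabel.TRUE_ORIGIN,
    {"true_origin_class": "AnimalManager",
     "true_origin_function": "daily_updates",
     "report_origin": "Reporter.report",
     "value": 193, "units": "animals"},
) == "[AnimalManager.daily_updates]: 193 (animals)"
```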

_can_add_detailed_values(sub_data_dict: dict[str, Any]) bool#

Checks if the provided sub_data_dict has the necessary structure and data to add detailed values.

Notes#

The sub_data_dict should meet the following requirements:

  • It must be a dictionary.

  • It must contain the keys “info_maps” and “values”.

  • The length of the “info_maps” list and the “values” list must be equal.

Parameters#

sub_data_dictdict[str, Any]

The dictionary to check for compatibility with adding detailed values.

Returns#

bool

True if the sub_data_dict meets the requirements for adding detailed values, False otherwise.

_dict_to_csv_column_list(variable_name: str, data_dict: dict[str, list[Any]]) list[Series]#

Turns a dictionary to a list of csv columns.

Parameters#

variable_namestr

The name of the variable having its values written into a CSV column.

data_dictdict[str, list[Any]]

The dictionary to read from

Returns#

list[pd.Series]

A list of pandas Series, one per CSV column, each carrying the column name and column data.

_get_units_substr(variable_name: str, units: str | dict[str, str] | None, subkey: str | None = None) str#

Get the units substring for a column title.

Parameters#

variable_namestr

The name of the variable or group of variables associated with the units.

unitsstr | dict[str, str] | None

The units associated with the data.

subkeystr | None, optional

The subkey to retrieve the units for, if units is a dictionary. Default is None.

Returns#

str

The formatted units substring for the column title.

Examples#

>>> output_manager = OutputManager()
>>> output_manager._get_units_substr("temperature", "C")
' (C)'
>>> output_manager._get_units_substr("velocity", {"magnitude": "m/s", "direction": "degrees"}, "magnitude")
' (m/s)'
>>> output_manager._get_units_substr("velocity", {"magnitude": "m/s", "direction": "degrees"}, "direction")
' (degrees)'
>>> output_manager._get_units_substr("coordinates", {"x": "m", "y": "m"})
''
_dict_to_file_csv(data_dict: dict[str, Any], path: Path, direction: str | None = 'portrait') None#

Saves a dictionary to a csv file.

Parameters#

data_dictdict[str, Any]

The dictionary to be saved.

pathPath

The path to the file to be saved.

directionstr | None

The direction of the csv file, either portrait or landscape; default is portrait. If None is provided, the file will be saved in the default portrait orientation.
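One reading of portrait versus landscape (an assumption; the exact RUFAS layout rules are not specified here) is variables-as-columns versus variables-as-rows:

```python
import csv
import io

def dict_to_csv_text(data_dict, direction="portrait"):
    """Sketch: portrait puts each variable in a column, landscape puts
    each variable in a row. The layout choice is an assumption."""
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    if direction == "landscape":
        for name, values in data_dict.items():
            writer.writerow([name, *values])
    else:  # portrait, also used when direction is None
        writer.writerow(list(data_dict))
        for row in zip(*data_dict.values()):
            writer.writerow(row)
    return buffer.getvalue()

text = dict_to_csv_text({"a": [1, 2], "b": [3, 4]})
assert text.splitlines() == ["a,b", "1,3", "2,4"]
```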

_list_to_file_txt(data_list: list[str], path: Path) None#

Saves a list into a text file

Parameters#

data_listlist[str]

The list of variable names to be saved

pathPath

The path to the file to be saved

Raises#

Exception

If an error occurs while saving to the file.

generate_file_name(base_name: str, extension: str, include_millis: bool = False) str#

Returns a file name using the given base_name and timestamp.

_exclude_info_maps(pool: dict[str, pool_element_type]) dict[str, pool_element_type]#

Makes a copy of the given pool and removes info_maps from it.

Returns#

dict[str, OutputManager.pool_element_type]

A copy of the given pool with info_maps removed from it.

_list_filter_files_in_dir(dir_path: Path) list[str]#

Returns the list of supported filter files in the given path

_load_filter_file_content(path: Path) tuple[list[dict[str, str | int]], str | None]#

Loads and processes the content of a filter file from the specified path.

Parameters#

pathPath

The path to the filter file (either .json or .txt).

Returns#

tuple[list[dict[str, str | int]], str | None]

A tuple of:

  • A list of dictionaries, each containing the loaded filter content, with keys and values depending on the file type.

  • A string representing the output CSV direction, either “portrait” or “landscape”. If no direction is specified, an empty string “” is returned.

Raises#

FileNotFoundError

If the specified file does not exist.

json.JSONDecodeError

If there is an issue with parsing a JSON file.

UnicodeDecodeError

If there is an issue with decoding a text file.

Exception

If an unsupported file format is encountered; only .json and .txt are supported.

Notes#

This method attempts to open and process a filter file located at the specified path. It supports two file formats: JSON and plain text (.txt). If the file is a JSON file, it loads the JSON content into a dictionary. If the file is a .txt file, it reads the lines and creates a dictionary with a “filters” key and a list of filter elements as values. Unsupported file formats will raise an exception.

This method is used to handle loading filter content from external files, which are used to define filtering criteria for the variables pool.

filter_variables_pool(filter_content: dict[str, Any]) dict[str, pool_element_type]#

Returns a filtered variables pool based on options specified in filter_content.

Parameters#

filter_contentdict[str, Any]

A dictionary that contains filtering options.

Returns#

dict[str, OutputManager.pool_element_type]

A filtered variables pool based on either inclusion or exclusion.
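A sketch of inclusion/exclusion filtering; the "filters" and "filter_by_exclusion" keys are taken from the surrounding docs, but the exact matching rules are not shown here, so exact-key matching is assumed.

```python
def filter_pool(pool, filter_content):
    """Sketch: keep or drop pool keys named in filter_content["filters"],
    depending on the filter_by_exclusion flag (key names are taken from
    the surrounding docs; matching semantics are assumed)."""
    selected = set(filter_content.get("filters", []))
    if filter_content.get("filter_by_exclusion", False):
        return {k: v for k, v in pool.items() if k not in selected}
    return {k: v for k, v in pool.items() if k in selected}

pool = {"A.f.x": {"values": [1]}, "B.g.y": {"values": [2]}}
assert list(filter_pool(pool, {"filters": ["A.f.x"]})) == ["A.f.x"]
```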

_parse_filtered_variables(filtered_pool: dict[str, dict[str, list[Any]]], selected_variables: list[str] | None, filter_name: str, use_filter_name: bool, filter_by_exclusion: bool) dict[str, dict[str, list[Any]]]#

Unpacks and counts variables that have been filtered out of the Output Manager’s variables pool.

Parameters#

filtered_pooldict[str, OutputManager.pool_element_type]

Variables that have been filtered out of the Output Manager’s pool.

selected_variableslist[str] | None

list of key names to select or exclude from variables containing dictionaries.

filter_namestr

Name of the filter used to collect variables for the filtered pool.

use_filter_namebool

Whether to use the filter name when constructing the key name for data pulled from a dictionary.

filter_by_exclusionbool

Whether keys in dictionaries should be filtered by exclusion.

Returns#

dict[str, OutputManager.pool_element_type]

Dictionary containing data from the filtered pool of data, with data from within dictionaries unpacked and separated.

_sort_saved_chunk_files() list[Path]#

Gets a list of all saved chunks of the output variable pool by retrieving all JSON files under saved_pool_chunks_path, then sorts the files by file name to preserve their order.

load_saved_pools() None#

Filters saved pools of data by applying specific filter criteria.

This method iterates over JSON files in the saved pool directory. It then loads each file as the OutputManager variable pool and applies the filter by calling the filter_variables_pool() method. The results are aggregated into a single dictionary, combining entries under the same key by extending lists of info_maps and values.

Notes#

This function has a side effect: it modifies the variables_pool of the OutputManager.

save_results(filters_dir_path: Path, exclude_info_maps: bool, produce_graphics: bool, report_dir: Path, graphics_dir: Path, csv_dir: Path, json_dir: Path) None#

Parses the filter files in the given directory and saves the results to the given path.

Notes#

The filter files can be used to generate different output formats such as JSON, CSV, and graphical output.

Parameters#

filters_dir_pathPath

Path of the directory containing the files containing the keys for filtering.

exclude_info_mapsbool

Flag for whether or not the user wants to include info_maps data in their results files.

produce_graphics: bool

Flag for whether or not the user wants to produce graphs after the simulation.

report_dirPath

The directory for saving reports to.

graphics_dirPath

The directory for saving graphics.

csv_dirPath

The directory for saving csvs.

json_dirPath

The directory for saving JSONs containing filtered simulation output.

_route_save_functions(filter_file: str, filtered_pool: dict[str, pool_element_type], produce_graphics: bool, filter_content: dict[str, str | int], json_dir: Path, graphics_dir: Path, csv_dir: Path, direction: str | None) None#

Checks the prefix of the filter_file to determine the format for saving. It then delegates the saving process to the corresponding function to handle specific formats such as JSON, CSV, or graphical output.

_save_to_json(filter_file: str, save_path: Path, filtered_pool: dict[str, pool_element_type], filter_content: dict[str, str | int]) None#

Saves the filtered pool to a JSON file.

Parameters#

filter_filestr

The name of the filter file being processed.

save_pathPath

The directory path where the JSON file will be saved.

filtered_pooldict[str, pool_element_type]

The pool of filtered data to be saved.

filter_contentdict[str, str | int]

Additional content from the filter that might influence the file naming.

route_logs(log_pool: list[dict[str, str | dict[str, str]]]) None#

Takes logs from other classes and routes them to the appropriate pools in Output Manager.

Parameters#

log_poollist[dict[str, str | dict[str, str]]]

A list of log, warning, and error dictionaries containing all the components needed to log the information to the appropriate pool.

dump_logs(path: Path) None#

Dumps logs_pool into a json file in the given path to a directory.

dump_warnings(path: Path) None#

Dumps warnings_pool into a json file in the given path to a directory.

dump_errors(path: Path) None#

Dumps errors_pool into a json file in the given path to a directory.

report_variables_usage_counts(path: Path) None#

Reports the usage counts of variables in the variables pool to a CSV file in the given path to a directory.

Parameters#

pathPath

The path to the directory where the file will be saved.

dump_variable_names_and_contexts(path: Path, exclude_info_maps: bool, format_option: str) None#

Dumps names of all variables added to variables_pool along with the caller class and function contextual information into a txt file in the given path to a directory.

Parameters#

pathPath

The path to the file to be dumped to.

exclude_info_mapsbool

Flag to denote whether info_map data should be dumped with variable names.

format_option{“block”, “inline”, “verbose”, “basic”}

The selection for the formatting option of the text written to the variables names text file.

Examples#

For the different format options available:

format_option: str = “basic”: Excludes information about whether data is from info_maps but has the same format as output CSV column headers.

class_name.function_name.variable_name1.sub_variable1_name
class_name.function_name.variable_name1.sub_variable2_name
class_name.function_name.variable_name2.sub_variable1_name
class_name.function_name.variable_name3

format_option: str = “block”:

class_name.function_name.variable_name
    .values: variable1_name
    .values: variable2_name
    .info_maps: variable3_name
    .info_maps: variable4_name

format_option: str = “inline”:

class_name.function_name.variable_name.values: [variable1_name, variable2_name]
class_name.function_name.variable_name.info_maps: [variable3_name, variable4_name]

format_option: str = “verbose”:

class_name.function_name.variable_name.values: variable1_name
class_name.function_name.variable_name.values: variable2_name
class_name.function_name.variable_name.info_maps: variable3_name
class_name.function_name.variable_name.info_maps: variable4_name

dump_all_nondata_pools(path: Path, exclude_info_maps: bool, format_option: str) None#

Dumps all non-data pools into the given path to a directory.

flush_pools() None#

Sets each pool to an empty dictionary.

load_variables_pool_from_file(file_path: Path) None#

Loads the Output Manager variables pool from file path provided by user.

Parameters#

file_pathPath

The path to the file to be loaded to the variables pool.

Raises#

FileNotFoundError

If the variables pool file does not exist at the specified path.

json.JSONDecodeError

If there is an error in decoding the JSON file.

clear_output_dir(vars_file_path: Path, output_dir: Path) None#

Clears the output directory if vars_file_path is not in the output directory.

Parameters#

vars_file_pathPath

Path to file used to load Output Manager vars pool.

output_dirPath

The directory for saving output.

is_file_in_dir(dir_path: Path, file_path: Path) bool#

Checks if a file path is in the provided directory.

Parameters#

dir_pathPath

Path to the directory to be checked.

file_pathPath

Path to file to be checked.
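A sketch of such a containment check using pathlib (the RUFAS implementation may differ):

```python
from pathlib import Path

def is_file_in_dir(dir_path: Path, file_path: Path) -> bool:
    """Resolve both paths and test whether file_path falls under dir_path."""
    try:
        file_path.resolve().relative_to(dir_path.resolve())
        return True
    except ValueError:
        # relative_to raises ValueError when file_path is outside dir_path.
        return False

assert is_file_in_dir(Path("/tmp"), Path("/tmp/out/vars.json"))
```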

create_directory(path: Path) None#

Creates a directory at the provided path if it does not already exist.

Parameters#

pathPath

The path where the directory will be created if it does not already exist.

_get_errors_warnings_logs_counts() tuple[int, int, int]#

Get the total number of errors, warnings, and logs in the output manager’s errors, warnings, and logs pools.

Returns#

tuple[int, int, int]

The total number of errors, warnings, and logs in the output manager’s errors, warnings, and logs pools.

print_credits(version_number: str) None#

Prints out the RuFaS credits when LogVerbosity is set to any level except NONE.

print_task_id(task_id: str) None#

Prints out the task ID when LogVerbosity is set to any level except NONE.

print_errors_warnings_logs_counts(task_id: str) None#

Prints out the counts of errors, warnings, and logs when LogVerbosity is set to any level except NONE.

set_exclude_info_maps_flag(exclude_info_maps: bool) None#

Sets the exclude_info_maps flag to the given value.

Parameters#

exclude_info_mapsbool

The value to set the exclude_info_maps flag to.

_get_origin_label(filter_content: dict[str, str | int]) OriginLabel#

Retrieves the origin label from the provided filter content.

Parameters#

filter_contentdict[str, str | int]

A dictionary containing filter information, which may include the “origin_label” key.

Returns#

OriginLabel

The origin label corresponding to the value in the filter content. If the “origin_label” key is not present or has an invalid value, OriginLabel.NONE is returned.

Notes#

This method checks the value of the origin_label key in the provided filter_content dictionary. If the value is a valid string matching one of the supported options defined in the OriginLabel enum, the corresponding OriginLabel member is returned. If the value is invalid or the key is not present, OriginLabel.NONE is returned, and an error is added to the Output Manager’s errors pool.
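The lookup-with-fallback behavior described in the notes can be sketched with the enum's value constructor. This is an illustrative standalone version; the real method additionally records an error in the Output Manager's errors pool on the fallback path.

```python
from enum import Enum
from typing import Any


class OriginLabel(Enum):
    TRUE_AND_REPORT_ORIGINS = "true and report origins"
    TRUE_ORIGIN = "true origin"
    REPORT_ORIGIN = "report origin"
    NONE = "none"


def get_origin_label(filter_content: dict[str, Any]) -> OriginLabel:
    """Map the 'origin_label' entry onto OriginLabel, defaulting to NONE."""
    raw = filter_content.get("origin_label")
    try:
        # Enum(value) raises ValueError for missing or invalid values.
        return OriginLabel(raw)
    except ValueError:
        # The real method also adds an error to the errors pool here.
        return OriginLabel.NONE
```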

run_startup_sequence(verbosity: LogVerbosity, exclude_info_maps: bool, output_directory: Path, clear_output_directory: bool, chunkification: bool, max_memory_usage_percent: int, max_memory_usage: int, save_chunk_threshold_call_count: int, variables_file_path: Path, output_prefix: str, task_id: str, is_end_to_end_testing_run: bool) None#

Performs various tasks that are needed to set up and run the Output Manager.

validate_filter_content(filters_dir_path: Path) None#

Validates the content of the filters, including keys and values.

Parameters#

filters_dir_pathPath

Path to the directory containing the filter files that define the filtering keys.

validate_json_filters(filter_content: dict[Any, Any], filter_name: str) None#

Validate the json filter.

Parameters#

filter_contentdict[Any, Any]

The report filter to validate.

filter_namestr

The name of the filter to validate.

Returns#

None

validate_csv_filters(filter_content: dict[Any, Any], filter_name: str) None#

Validate the csv filter.

Parameters#

filter_contentdict[Any, Any]

The report filter to validate.

filter_namestr

The name of the filter to validate.

Returns#

None

validate_report_filters(filter_content: dict[Any, Any], filter_name: str) None#

Validate the report filter.

Parameters#

filter_contentdict[Any, Any]

The report filter to validate.

filter_namestr

The name of the filter to validate.

Returns#

None

validate_direction(value: Any, content_name: str, filter_name: str) None#

Validates the direction of CSV outputs.

Parameters#

valueAny

The direction option to validate.

content_namestr

The corresponding filter option to provide in error reporting.

filter_namestr

Name of the filter to validate.

Returns#

None

validate_graph_details(value: Any, content_name: str, filter_name: str) None#

Validate the graph details provided.

Parameters#

valueAny

The graph details to validate.

content_namestr

The corresponding filter option to provide in error reporting.

filter_namestr

Name of the filter to validate.

Returns#

None

validate_type(value: Any, content_name: str, filter_name: str, expected: type, type_label: str) None#

Generic type checker.

Parameters#

valueAny

The value to check.

content_namestr

Name of the field, for error messages.

filter_namestr

Name of the filter being validated.

expectedtype

A type or tuple of types that value must be an instance of.

type_labelstr

A human-readable description of the type (used in the error message).
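A generic checker like this typically reports failures rather than raising. The sketch below is an assumption-heavy illustration: the `errors` list parameter stands in for the Output Manager's errors pool, which the real method writes to internally.

```python
from __future__ import annotations

from typing import Any


def validate_type(
    value: Any,
    content_name: str,
    filter_name: str,
    expected: type | tuple[type, ...],
    type_label: str,
    errors: list[str],
) -> None:
    """Append a descriptive message to `errors` when the isinstance check fails."""
    if not isinstance(value, expected):
        errors.append(
            f"'{content_name}' in filter '{filter_name}' must be "
            f"{type_label}, got {type(value).__name__}."
        )
```

The field and filter names used below ("line_width", "growth_filter") are hypothetical.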

validate_aggregator(value: Any, content_name: str, filter_name: str) None#

Validate the aggregator option provided.

Parameters#

valueAny

The aggregator option to validate.

content_namestr

The corresponding filter option to provide in error reporting.

filter_namestr

Name of the filter to validate.

Returns#

None

validate_list_of_strings(value: Any, content_name: str, filter_name: str) None#

Validate filter content that should be a list of strings.

Parameters#

valueAny

The filter content to validate.

content_namestr

The corresponding filter option to provide in error reporting.

filter_namestr

The name of the filter to validate.

Returns#

None
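The core check can be expressed as a small predicate. This is a sketch of the condition being validated, not the RUFAS method itself (which reports failures to the errors pool instead of returning a bool).

```python
def is_list_of_strings(value) -> bool:
    """Return True when value is a list whose elements are all strings."""
    return isinstance(value, list) and all(isinstance(item, str) for item in value)
```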

validate_dict_of_numbers(value: Any, content_name: str, filter_name: str) None#

Validate filter content that should be a dictionary with string keys and int or float values.

Parameters#

valueAny

The filter content to validate.

content_namestr

The corresponding filter option to provide in error reporting.

filter_namestr

Name of the filter to validate.

Returns#

None
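As above, the condition itself can be sketched as a predicate. One subtlety worth noting in any such check: `bool` is a subclass of `int` in Python, so it must be excluded explicitly if booleans should not count as numbers (an assumption on this sketch's part).

```python
def is_dict_of_numbers(value) -> bool:
    """Return True when value is a dict mapping strings to ints or floats.

    bool is excluded explicitly because it is a subclass of int.
    """
    if not isinstance(value, dict):
        return False
    return all(
        isinstance(k, str) and isinstance(v, (int, float)) and not isinstance(v, bool)
        for k, v in value.items()
    )
```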

validate_graph_type(value: Any, content_name: str, filter_name: str) None#

Validate the provided graph type in the filter contents.

Parameters#

valueAny

The filter content to validate.

content_namestr

The corresponding filter option to provide in error reporting.

filter_namestr

Name of the filter to validate.

Returns#

None

validate_customization_details(value: Any, content_name: str, filter_name: str) None#

Validate the graph customization details in the filter contents.

Parameters#

valueAny

The filter content to validate.

content_namestr

The corresponding filter option to provide in error reporting.

filter_namestr

Name of the filter to validate.

Returns#

None

instance = <RUFAS.output_manager.OutputManager object>#