diff --git a/docs/api.rst b/docs/api.rst
index 0d47458d6..475231fa1 100644
--- a/docs/api.rst
+++ b/docs/api.rst
@@ -75,8 +75,7 @@ Agents and associated functionalities
 -------------------------------------
 
 .. automodule:: muse.agents.factories
-    :members: agents_factory, create_agent, create_retrofit_agent, create_newcapa_agent,
-        factory
+    :members: agents_factory, create_agent, create_retrofit_agent, create_newcapa_agent
 
 .. autoclass:: muse.agents.agent.AbstractAgent
@@ -129,9 +128,11 @@ Constraints:
 ~~~~~~~~~~~~
 
 .. automodule:: muse.constraints
-    :members: demand, factory, max_capacity_expansion, max_production, lp_costs,
-        lp_constraint, lp_constraint_matrix, register_constraints, search_space,
-        ScipyAdapter
+    :members: demand, factory, max_capacity_expansion, max_production,
+        register_constraints, search_space, minimum_service, demand_limiting_capacity
+
+.. automodule:: muse.lp_adapter
+    :members: lp_costs, lp_constraint, lp_constraint_matrix, ScipyAdapter
 
 Initial and Final Asset Transforms
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -150,9 +151,6 @@ Reading the inputs
 .. automodule:: muse.readers.csv
     :members:
 
-.. automodule:: muse.decorators
-    :members:
-
 ---------------
 Writing Outputs
 ---------------
@@ -170,6 +168,18 @@ Sectorial Outputs
 .. automodule:: muse.outputs.sector
     :members:
 
+Global Outputs
+~~~~~~~~~~~~~~
+
+.. automodule:: muse.outputs.mca
+    :members:
+
+Cache
+~~~~~
+
+.. automodule:: muse.outputs.cache
+    :members:
+
 ----------
 Quantities
 ----------
@@ -215,6 +225,12 @@ Functionality Registration
 .. automodule:: muse.registration
     :members:
 
+Costs
+~~~~~
+
+.. automodule:: muse.costs
+    :members:
+
 Utilities
 ~~~~~~~~~
 
diff --git a/docs/application-flow.rst b/docs/application-flow.rst
index 401a4fa3d..0555177b6 100644
--- a/docs/application-flow.rst
+++ b/docs/application-flow.rst
@@ -1,7 +1,7 @@
 Application Flow
 ================
 
-While not essential to be able to use MUSE, it is useful to know the sequence of events that a run of MUSE will follow in a bit more detail that the brief overview of the :ref:`MUSE Overview` section. Let's start with the big picture.
+While not essential to be able to use MUSE, it is useful to know the sequence of events that a run of MUSE will follow in a bit more detail than the brief overview of the :doc:`MUSE Overview ` section. Let's start with the big picture.
 
 .. note::
 
@@ -245,7 +245,7 @@ The sequence of steps related to the carbon budget control are as follows:
     single_year [label="Single year\niteration", fillcolor="lightgrey", style="rounded,filled"]
     emissions [label="Calculate emissions\nof carbon comodities"]
     comparison [label="Emissions\n> budget\n", shape=diamond, style=""]
-    new_price [label="Calculate new\ncarbon price", fillcolor="lightgrey", style="rounded,filled"]
+    new_price_node [label="Calculate new\ncarbon price", fillcolor="lightgrey", style="rounded,filled"]
 
     subgraph cluster_1 {
@@ -255,8 +255,8 @@
     start -> single_year
     comparison -> end [label="No", constraint=false]
-    comparison -> new_price [label="Yes"]
-    new_price -> end
+    comparison -> new_price_node [label="Yes"]
+    new_price_node -> end
 }
 
 The **method used to calculate the new carbon price** can be selected by the user. There are currently only two options for this method, ``fitting`` and ``bisection``, however this can be expanded by the user with the ``@register_carbon_budget_method`` hook in ``muse.carbon_budget``.
 
@@ -440,15 +440,15 @@ The following graph summarises the process.
     input_search[label="SearchRule\n(Agents.csv)", fillcolor="#ffb3b3", style="rounded,filled"]
     input_objectives[label="Objective\n(Agents.csv)s", fillcolor="#ffb3b3", style="rounded,filled"]
     input_decision[label="DecisionMethod\n(Agents.csv)", fillcolor="#ffb3b3", style="rounded,filled"]
-    input_constrains[label="Constrains\n(settings.toml)", fillcolor="#ffb3b3", style="rounded,filled"]
+    input_constraints[label="Constraints\n(settings.toml)", fillcolor="#ffb3b3", style="rounded,filled"]
     input_solver[label="lpsolver\n(settings.toml)", fillcolor="#ffb3b3", style="rounded,filled"]
 
-    start -> demand_share -> search -> objectives -> decision -> constrains -> invest -> end
+    start -> demand_share -> search -> objectives -> decision -> constraints -> invest -> end
 
     input_demand -> demand_share
     input_search -> search
     input_objectives -> objectives
     input_decision -> decision
-    input_constrains -> constrains
+    input_constraints -> constraints
     input_solver -> invest
 }
 
@@ -463,7 +463,7 @@ For those selected replacement technologies, an objective function is computed.
 
 Then, a decision is computed. Decision methods reduce multiple objectives into a single scalar objective per replacement technology. The decision method to use is selected in the ``Agents.csv`` file. They allow combining several objectives into a single metric through which replacement technologies can be ranked. See :py:mod:`muse.decisions`.
 
-The final step of preparing the investment process is to compute the constrains, e.g. factors that will determine how much a technology could be invested in and include things like matching the demand, the search rules calculated above, the maximum production of a technology for a given capacity or the maximum capacity expansion for a given time period. Available constrains are set in the subsector section of the ``settings.toml`` file and described in :py:mod:`muse.constrains`. By default, all of them are applied. Note that these constrains might result in unfeasible situations if they do not allow the production to grow enough to match the demand. This is one of the common reasons for a MUSE simulation not converging.
+The final step of preparing the investment process is to compute the constraints, e.g. factors that will determine how much a technology could be invested in and include things like matching the demand, the search rules calculated above, the maximum production of a technology for a given capacity or the maximum capacity expansion for a given time period. Available constraints are set in the subsector section of the ``settings.toml`` file and described in :py:mod:`muse.constraints`. By default, all of them are applied. Note that these constraints might result in unfeasible situations if they do not allow the production to grow enough to match the demand. This is one of the common reasons for a MUSE simulation not converging.
 
 With all this information, the investment process can proceed. This is done per sector using the method described by the ``lpsolver`` in the ``settings.toml`` file. Available solvers are described in :py:mod:`muse.investments`
 
diff --git a/docs/conf.py b/docs/conf.py
index b339c72ad..a09cb7b51 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -50,14 +50,19 @@
 add_module_names = False
 nbsphinx_allow_errors = True
 autosectionlabel_prefix_document = True
+nitpicky = True
+
+# Suppress warnings for documents not included in any toctree (e.g. release notes)
+suppress_warnings = ["toc.not_included"]
 
 intersphinx_mapping = {
     "python": ("https://docs.python.org/3", None),
     "numpy": ("http://docs.scipy.org/doc/numpy/", None),
-    "pandas": ("http://pandas.pydata.org/pandas-docs/dev", None),
-    "xarray": ("http://xarray.pydata.org/en/stable/", None),
+    "pandas": ("https://pandas.pydata.org/pandas-docs/stable/", None),
+    "xarray": ("https://xarray.pydata.org/en/stable/", None),
 }
+
 bibtex_bibfiles: list[str] = []
 
 # -- GraphViz configuration ----------------------------------
diff --git a/docs/inputs/agents.rst b/docs/inputs/agents.rst
index 1f168f975..d52b09daf 100644
--- a/docs/inputs/agents.rst
+++ b/docs/inputs/agents.rst
@@ -7,7 +7,11 @@ Agents
 
 Agents are defined using a CSV file, with one agent per row, using a format meant
 specifically for retrofit and new-capacity agent pairs.
-For instance, we have the following CSV table:
+Each sector should have an agents file, which should follow the structure
+reported in the table below, and be referenced from the TOML settings file using the
+``agents`` key.
+
+This is an example of what an agents file could look like:
 
 .. csv-table::
     :header: name, type, agent_share, region, objective1, search_rule, decision_method, ...
diff --git a/docs/inputs/commodities.rst b/docs/inputs/commodities.rst
index eb8625470..57c08ba6b 100644
--- a/docs/inputs/commodities.rst
+++ b/docs/inputs/commodities.rst
@@ -6,7 +6,8 @@ Global Commodities
 
 MUSE handles a configurable number and type of commodities which are primarily used to
 represent energy, services, pollutants/emissions. The commodities for the simulation as
-a whole are defined in a csv file with the following structure.
+a whole are defined in a csv file with the following structure, which is referenced from
+the TOML settings file using the ``global_commodities`` key.
 
 .. csv-table:: Global commodities
     :header: commodity, description, commodity_type, unit
diff --git a/docs/inputs/commodities_io.rst b/docs/inputs/commodities_io.rst
index 67b554112..05a693b2a 100644
--- a/docs/inputs/commodities_io.rst
+++ b/docs/inputs/commodities_io.rst
@@ -1,8 +1,8 @@
 .. _inputs-iocomms:
 
-=================
 Commodity Inputs/Outputs
-=================
+=========================
 
 **Input**
 
@@ -37,8 +37,10 @@ to cover space heating and water heating energy service demands.
     :alt: Electric boilers input output commodities
 
-Below it is shown the generic structure of the input commodity file for the electric
-heater.
+Each sector in MUSE should have two separate CSV files for commodity inputs and outputs,
+each of which should follow the structure reported in the table below. These should be
+referenced from the TOML settings file using the ``commodities_in`` and ``commodities_out`` keys
+respectively.
 
 .. csv-table:: Commodities used as consumables - Input commodities
     :header: technology, region, year, level, electricity
diff --git a/docs/inputs/correlation_files.rst b/docs/inputs/correlation_files.rst
index 6cbfddf2c..eab59b0a6 100644
--- a/docs/inputs/correlation_files.rst
+++ b/docs/inputs/correlation_files.rst
@@ -1,3 +1,5 @@
+.. _correlation-files:
+
 Correlation Demand Files
 ========================
 
@@ -11,7 +13,9 @@ To do this, a minimum of three files are required:
 #. A file which dictates how the demand per benchmark year is split across the
    timeslices.
 
-We will go into the details of each of these files below.
+These files (explained in more detail below) should be referenced from the TOML settings
+file using the ``macrodrivers_path``, ``regression_path``, and ``timeslice_shares_path``
+keys respectively.
 
 Macrodrivers
 ------------
 
diff --git a/docs/inputs/existing_capacity.rst b/docs/inputs/existing_capacity.rst
index ded240a2f..de5614259 100644
--- a/docs/inputs/existing_capacity.rst
+++ b/docs/inputs/existing_capacity.rst
@@ -4,12 +4,12 @@
 Existing Capacity
 ==========================
 
-For each technology, the decommissioning profile should be given to MUSE.
-
-The csv file which provides the installed capacity in base year and the decommissioning
-profile in the future periods for each technology in a sector, in each region, should
-follow the structure reported in the table.
+This file provides the installed capacity in base year and the decommissioning
+profile in the future periods for each technology in a sector, in each region.
+Each sector should have an existing capacity file, which should follow the structure
+reported in the table below, and be referenced from the TOML settings file using the
+``existing_capacity`` key.
 
 .. csv-table:: Existing capacity of technologies: the residential boiler example
     :header: technology, region, 2010, 2020, 2030, 2040, 2050
@@ -17,7 +17,6 @@ follow the structure reported in the table.
 
     resBoilerElectric, region1, 5, 0.5, 0, 0, 0
     resBoilerElectric, region2, 39, 3.5, 1, 0.3, 0
 
-
 ``technology`` represents the technology ID and needs to be consistent across all the
 data inputs.
diff --git a/docs/inputs/index.rst b/docs/inputs/index.rst
index 3fa89a744..55e6c26b9 100644
--- a/docs/inputs/index.rst
+++ b/docs/inputs/index.rst
@@ -4,7 +4,30 @@
 Input Files
 ===========
 
-In this section we detail each of the files required to run MUSE. We include information based on how these files should be used, as well as the data that populates them.
+In this section we detail each of the files required to run MUSE.
+We include information based on how these files should be used, as well as the data that populates them.
+
+All MUSE simulations require a settings file in TOML format (see :ref:`toml-primer` and :ref:`simulation-settings`), as well as a set of CSV files that provide the simulation data.
+
+Whilst file names and paths are fully flexible and can be configured via the settings TOML,
+a typical minimal file layout might look something like this:
+
+model_name/
+    - :ref:`settings.toml `
+    - :ref:`GlobalCommodities.csv `
+    - :ref:`Projections.csv `
+    - sector1/
+        - :ref:`Technodata.csv `
+        - :ref:`CommoditiesIn.csv `
+        - :ref:`CommoditiesOut.csv `
+        - :ref:`ExistingCapacity.csv `
+        - :ref:`Agents.csv `
+        - presets/
+            - :ref:`Consumption2020.csv `
+            - etc.
+
+Note, however, that this is just a convention for simple models, and more complex models may benefit from or require a different file structure.
+See full documentation below for more details on the settings TOML and all the different types of data file.
 
 .. toctree::
     :maxdepth: 2
diff --git a/docs/inputs/inputs_csv.rst b/docs/inputs/inputs_csv.rst
index b0ecfed56..6d7dbdf06 100644
--- a/docs/inputs/inputs_csv.rst
+++ b/docs/inputs/inputs_csv.rst
@@ -2,6 +2,8 @@
 Simulation Data Files
 =====================
 
+.. _inputs_csv:
+
 This section details the CSV files that are used to populate the simulation data.
 
 .. toctree::
@@ -15,4 +17,5 @@ This section details the CSV files that are used to populate the simulation data
     commodities_io
     existing_capacity
     agents
+    preset_commodity_demands
     correlation_files
diff --git a/docs/inputs/preset_commodity_demands.rst b/docs/inputs/preset_commodity_demands.rst
new file mode 100644
index 000000000..cc81455d7
--- /dev/null
+++ b/docs/inputs/preset_commodity_demands.rst
@@ -0,0 +1,33 @@
+.. _preset-consumption-file:
+
+Preset Commodity Demands
+=============================
+
+
+This document describes the CSV files used to supply pre-set commodity consumption
+profiles to MUSE. These files are referenced from the TOML setting ``consumption_path``
+and are typically provided one file per year (file names must include the year, e.g.
+``Consumption2015.csv``). Wildcards are supported in the path (for example
+``{cwd}/Consumption*.csv``).
+
+The CSV format should follow the structure shown in the example below.
+
+.. csv-table:: Consumption
+    :header: "RegionName", "Timeslice", "electricity", "diesel", "algae"
+    :stub-columns: 2
+
+    USA,1,1.9,0,0
+    USA,2,1.8,0,0
+
+``RegionName``
+    The region identifier. Must match region IDs used across other inputs.
+
+``Timeslice``
+    Index of the timeslice, according to the timeslice definition in the settings TOML.
+    Indexing starts at 1 (i.e. the first timeslice defined in
+    the global timeslices definition is 1, the second is 2, etc).
+
+Commodities (one column per commodity)
+    Any additional columns represent commodities. Column names must match the
+    commodity identifiers defined in the global commodities file. Values are the
+    consumption quantities for that timeslice and region.
diff --git a/docs/inputs/projections.rst b/docs/inputs/projections.rst
index 5de947b2c..cc63fa320 100644
--- a/docs/inputs/projections.rst
+++ b/docs/inputs/projections.rst
@@ -1,8 +1,7 @@
 .. _inputs-projection:
 
-=========================
 Commodity Price Projections
-=========================
+===========================
 
 This file can be used to supply pre-set prices for commodities.
 The interpretation of these prices depends on the type of commodity:
@@ -25,7 +24,8 @@ The interpretation of these prices depends on the type of commodity:
 
 Lack of a price trajectory will be interpreted as a price of 0 for all periods (i.e. no
 levy on production), again with the exception of the carbon budget mode.
 
-The price trajectory should follow the structure shown in the table below.
+Price trajectories should be stored in a CSV file with the structure shown in the
+table below, and referenced from the TOML settings file using the ``projections`` key.
 
 .. csv-table:: Initial market projections
     :header: region, attribute, year, com1, com2, com3
diff --git a/docs/inputs/technodata.rst b/docs/inputs/technodata.rst
index c6c5faf6e..5a55cb17f 100644
--- a/docs/inputs/technodata.rst
+++ b/docs/inputs/technodata.rst
@@ -5,7 +5,9 @@ Technodata
 ===========
 
 The technodata includes the techno-economic characteristics of each technology such as capital, fixed and variable cost, lifetime, utilization factor.
-The technodata should follow the structure reported in the table below.
+Models should have one technodata file for each sector, which is referenced
+in the TOML settings file using the ``technodata`` key.
+Technodata files should follow the structure reported in the table below.
 In this example, we show an electric boiler for a generic region, region1:
 
 .. csv-table:: Technodata
diff --git a/docs/inputs/technodata_timeslices.rst b/docs/inputs/technodata_timeslices.rst
index 616678aba..fe0c58274 100644
--- a/docs/inputs/technodata_timeslices.rst
+++ b/docs/inputs/technodata_timeslices.rst
@@ -6,6 +6,8 @@ Technodata Timeslices
 
 The techno-data timeslices is an optional file which allows technology utilization factors and minimum service factors to be specified for each timeslice. For instance, if you were to model solar photovoltaics, you would probably want to specify that they can not produce any electricity at night, or if you're modelling a nuclear power plant, that they must generate a minimum amount of electricity.
 
+Technodata timeslice files, if present, should follow the structure reported in the table below, and be referenced from the TOML settings file using the ``technodata_timeslices`` key.
+
 .. csv-table:: Techno-data
     :header: technology,region,year,month,day,hour,utilization_factor,minimum_service_factor
diff --git a/docs/inputs/toml.rst b/docs/inputs/toml.rst
index 4ac3f0746..e793b78ed 100644
--- a/docs/inputs/toml.rst
+++ b/docs/inputs/toml.rst
@@ -1,8 +1,8 @@
 .. _simulation-settings:
 
-=====================
 Simulation settings TOML file
-=====================
+=============================
 
 .. currentmodule:: muse
 
@@ -85,7 +85,7 @@ a whole.
 
 ``plugins`` (optional)
     Path or list of paths to extra python plugins, i.e. files with registered functions
-    such as :py:meth:`~muse.outputs.register_output_quantity`.
+    such as :py:meth:`~muse.outputs.mca.register_output_quantity`.
 
 ------------------
@@ -228,12 +228,11 @@ A sector accepts these attributes:
 
 ``technodata``
     Path to a csv file containing the characterization of the technologies involved in
-    the sector, e.g. lifetime, capital costs, etc... See :ref:`inputs-technodata`.
+    the sector, e.g. lifetime, capital costs, etc. See :ref:`inputs-technodata`.
 
 ``technodata_timeslices`` (optional)
     Path to a csv file describing the utilization factor and minimum service
-    factor of each technology in each timeslice.
-    See :ref:`user_guide/inputs/technodata_timeslices`.
+    factor of each technology in each timeslice. See :ref:`inputs-technodata-ts`.
 
 ``commodities_in``
     Path to a csv file describing the inputs of each technology involved in the sector.
@@ -259,7 +258,7 @@ Sectors contain a number of subsections:
     different commodities. There must be at least one subsector, and there can be as
     many as required. For instance, a one-subsector setup would look like:
 
-    .. code-block:: toml
+    .. code-block:: TOML
 
         [sectors.gas.subsectors.all]
         agents = '{path}/gas/Agents.csv'
@@ -267,7 +266,7 @@
 
     A two-subsector could look like:
 
-    .. code-block:: toml
+    .. code-block:: TOML
 
         [sectors.gas.subsectors.methane_and_ethanol]
         agents = '{path}/gas/me_agents.csv'
@@ -285,19 +284,18 @@ Sectors contain a number of subsections:
 
 ``agents``
     Path to a csv file describing the agents in the sector.
-    See :ref:`user_guide/inputs/agents:agents`.
+    See :ref:`inputs-agents`.
 
 ``existing_capacity``
     Path to a csv file describing the initial capacity of the sector.
-    See :ref:`user_guide/inputs/existing_capacity:existing sectoral capacity`.
+    See :ref:`inputs-existing-capacity`.
 
 ``lpsolver`` (optional, default = **scipy**)
     The solver for linear problems to use when figuring out investments. The solvers
     are registered via :py:func:`~muse.investments.register_investment`.
     At time of writing, three are available:
 
-    - **scipy** solver (default from v1.3): Formulates investment as a true LP problem and solves it using
-      the `scipy solver`_.
+    - **scipy** solver (default from v1.3): Formulates investment as a true LP problem and solves it using the ``scipy`` solver.
 
     - **adhoc** solver: Simple in-house solver that ranks the technologies according to
       cost and service the demand incrementally.
@@ -356,8 +354,8 @@ Sectors contain a number of subsections:
 
 ``quantity``
     Name of the quantity to save. The options are capacity, consumption, supply and costs.
-    Users can also customize and create further output quantities by registering with MUSE via
-    :py:func:`muse.outputs.register_output_quantity`. See :py:mod:`muse.outputs` for more details.
+    Users can also customize and create further output quantities by registering with
+    MUSE via :py:func:`muse.outputs.mca.register_output_quantity`. See :py:mod:`muse.outputs.sector` for more details.
 
 ``sink``
     the sink is the place (disk, cloud, database, etc...) and format with which
@@ -365,7 +363,7 @@ Sectors contain a number of subsections:
     implemented. The following sinks are available: "csv", "netcfd", "excel" and
     "aggregate".
     Additional sinks can be added by interested users, and registered with MUSE via
-    :py:func:`muse.outputs.register_output_sink`. See :py:mod:`muse.outputs` for more details.
+    :py:func:`muse.outputs.sinks.register_output_sink`. See :py:mod:`muse.outputs.sinks` for more details.
 
 ``filename``
     defines the format of the file where to save the data. There are several
@@ -456,8 +454,8 @@ Sectors contain a number of subsections:
 
     .. code-block:: TOML
 
        [[sectors.commercial.interactions]]
-       net = {"name": "some_net", "param": "some value"}
-       interaction = {"name": "some_interaction", "param": "some other value"}
+       net = { name = "some_net", param = "some value" }
+       interaction = { name = "some_interaction", param = "some other value" }
 
     The parameters will depend on the net and interaction functions. Neither
     "new_to_retro" nor "transfer" take any arguments at this point. MUSE interaction
@@ -472,10 +470,10 @@ Preset sectors
 --------------
 
 The commodity production, commodity consumption and product prices of preset sectors are determined
-exogeneously. They are know from the start of the simulation and are not affected by the
+exogenously. They are known from the start of the simulation and are not affected by the
 simulation.
 
-A common example would be the following, where commodity consumption is defined exogeneously:
+A common example would be the following, where commodity consumption is defined exogenously (see :ref:`consumption_path `):
 
 .. code-block:: TOML
 
    [sectors.commercial_presets]
   	type = 'presets'
   	priority = 0
   	consumption_path = "{path}/commercial_presets/*Consumption.csv"
 
@@ -484,7 +482,7 @@ A common example would be the following, where commodity consumption is defined
-Alternatively, you may define consumption as a function of macro-economic data, i.e. population and GDP:
+Alternatively, you may define consumption as a function of macro-economic data, i.e. population and GDP (see :ref:`correlation-files`):
 
 .. code-block:: TOML
 
@@ -512,19 +510,7 @@ The following attributes are accepted:
     current working directory. The file names must include the year for which it
     defines the consumption, e.g. `Consumption2015.csv`.
 
-    The CSV format should follow the following format:
-
-    .. csv-table:: Consumption
-        :header: "RegionName", "Timeslice", "electricity", "diesel", "algae"
-        :stub-columns: 2
-
-        USA,1,1.9,0,0
-        USA,2,1.8,0,0
-
-    The "RegionName" and "Timeslice" columns must be present.
-    Further columns are reserved for commodities. "Timeslice" refers to the
-    index of the timeslice. Timeslices should be defined consistently to the sectoral
-    level timeslices.
+    The CSV format should follow the format described in the :ref:`Preset commodity demands ` document.
 
 ``supply_path``
     CSV file, one per year, indicating the amount of commodities produced. It follows
@@ -561,9 +547,10 @@ The following attributes are accepted:
     :ref:`macrodrivers_path`.
 
--------------
+.. _carbon-market:
+
 Carbon market (optional)
--------------
+------------------------
 
 This section contains the settings related to the modelling of the carbon market. If
 omitted, it defaults to not including the carbon market in the simulation.
 
@@ -628,15 +615,15 @@ For example
 
--------------
+---------------------------------
 Output cache (for advanced users)
--------------
+---------------------------------
 
 ``outputs_cache``
     This option behaves exactly like `outputs` for sectors and accepts the same options
     but controls the output of cached quantities instead. This option is NOT available for
-    sectors themselves (i.e using `[[sector.commercial.outputs_cache]]` will have no effect). See
-    :py:mod:`muse.outputs.cache` for more details.
+    sectors themselves (i.e using `[[sector.commercial.outputs_cache]]` will have no effect).
+    See :py:mod:`muse.outputs.cache` for more details.
 
 A single row looks like this:
 
diff --git a/docs/inputs/toml_primer.rst b/docs/inputs/toml_primer.rst
index 512b6f8f1..c98c5b6b4 100644
--- a/docs/inputs/toml_primer.rst
+++ b/docs/inputs/toml_primer.rst
@@ -33,7 +33,7 @@ three examples are equivalent:
 
 .. code-block:: TOML
 
     [sectors.residential]
-    production = {"name": "match", "costing": "prices"}
+    production = { name = "match", costing = "prices" }
 
 .. code-block:: TOML
 
diff --git a/docs/installation/pipx-based.rst b/docs/installation/pipx-based.rst
index 4aed5c30d..5d44792ca 100644
--- a/docs/installation/pipx-based.rst
+++ b/docs/installation/pipx-based.rst
@@ -100,6 +100,8 @@ There are multiple ways of installing Python, as well as multiple distributions.
 If you have Anaconda Python installed, then you can use it instead of ``pyenv`` to
 create an environment with a suitable Python version. Go to section :ref:`conda-venvs`
 and jump to `Installing pipx`_ when it is completed.
 
+.. _pipx-based-installing-pyenv:
+
 Installing ``pyenv``
 ^^^^^^^^^^^^^^^^^^^^
 
@@ -108,13 +110,13 @@ Installing ``pyenv``
 To install ``pyenv``, follow these steps:
 
 - **Linux**: In this case, you will need to clone the GitHub repository using ``git``. Most Linux distributions come with ``git`` installed, so this should work out of the box.
-Then, complete the setup by adding ``pyenv`` to your profile, so the executable can be found. You can `check the instructions in the official webpage `_,
-or follow the below commands that were tested on `Ubuntu 22.04 LTS` using its popular `bash shell` and `z-shell`. To be specific, we tested them
-on `GNU bash, version 5.1.16(1)-release (x86_64-pc-linux-gnu)` and `zsh 5.8.1 (x86_64-ubuntu-linux-gnu)`.
+  Then, complete the setup by adding ``pyenv`` to your profile, so the executable can be found. You can `check the instructions in the official webpage `_,
+  or follow the below commands that were tested on `Ubuntu 22.04 LTS` using its popular `bash shell` and `z-shell`. To be specific, we tested them
+  on `GNU bash, version 5.1.16(1)-release (x86_64-pc-linux-gnu)` and `zsh 5.8.1 (x86_64-ubuntu-linux-gnu)`.
 
-Now, we go through the installation procedure of ``pyenv`` on Linux, step-by-step:
+  Now, we go through the installation procedure of ``pyenv`` on Linux, step-by-step:
 
-  .. code-block::
+    .. code-block::
 
       # Step 1: Install essential libraries needed for pyenv
       sudo apt install -y make build-essential libssl-dev zlib1g-dev \
@@ -154,9 +156,8 @@ Now, we go through the installation procedure of ``pyenv`` on Linux, step-by-ste
   Then, complete the setup by adding ``pyenv`` to your profile, so the executable can be
   found. `Check the instructions in the official webpage `_.
 
-- **Windows**: ``pyenv-win`` is a separate project but it has the same functionality and it is also simpler to setup.
-You can read the detailed installation instructions `from the official pyenv-win website `_,
-but the easiest way is to run the following command in the ``powershell`` and, upon closing and launching a new shell, you should be ready to go:
+
+- **Windows**: ``pyenv-win`` is a separate project but it has the same functionality and it is also simpler to set up. You can read the detailed installation instructions `from the official pyenv-win website `_, but the easiest way is to run the following command in the ``powershell`` and, upon closing and launching a new shell, you should be ready to go:
 
   .. code-block:: powershell
 
@@ -180,7 +181,7 @@ but the easiest way is to run the following command in the ``powershell`` and, u
 
 Installing your chosen Python version
-^^^^^^^^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 With ``pyenv`` installed and correctly configured, it is now easy to install any
 Python version we want.
 To see the versions available run:
 
diff --git a/docs/source/muse.outputs.rst b/docs/source/muse.outputs.rst
deleted file mode 100644
index d68e8e1e9..000000000
--- a/docs/source/muse.outputs.rst
+++ /dev/null
@@ -1,46 +0,0 @@
-muse.outputs package
-====================
-
-Submodules
-----------
-
-muse.outputs.mca module
------------------------
-
-.. automodule:: muse.outputs.mca
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-muse.outputs.cache module
--------------------------
-
-.. automodule:: muse.outputs.cache
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-muse.outputs.sector module
---------------------------
-
-.. automodule:: muse.outputs.sector
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-muse.outputs.sinks module
--------------------------
-
-.. automodule:: muse.outputs.sinks
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-
-Module contents
----------------
-
-.. automodule:: muse.outputs
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/muse.readers.rst b/docs/source/muse.readers.rst
deleted file mode 100644
index f5a111282..000000000
--- a/docs/source/muse.readers.rst
+++ /dev/null
@@ -1,30 +0,0 @@
-muse.readers package
-====================
-
-Submodules
-----------
-
-muse.readers.csv module
------------------------
-
-.. automodule:: muse.readers.csv
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-muse.readers.toml module
-------------------------
-
-.. automodule:: muse.readers.toml
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-
-Module contents
----------------
-
-.. automodule:: muse.readers
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/muse.sectors.rst b/docs/source/muse.sectors.rst
deleted file mode 100644
index c6f0db570..000000000
--- a/docs/source/muse.sectors.rst
+++ /dev/null
@@ -1,54 +0,0 @@
-muse.sectors package
-====================
-
-Submodules
-----------
-
-muse.sectors.abstract module
-----------------------------
-
-.. automodule:: muse.sectors.abstract
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-muse.sectors.preset\_sector module
-----------------------------------
-
-.. automodule:: muse.sectors.preset_sector
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-muse.sectors.register module
-----------------------------
-
-.. automodule:: muse.sectors.register
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-muse.sectors.sector module
---------------------------
-
-.. automodule:: muse.sectors.sector
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-muse.sectors.subsector module
------------------------------
-
-.. automodule:: muse.sectors.subsector
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-
-Module contents
----------------
-
-.. automodule:: muse.sectors
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/src/muse/carbon_budget.py b/src/muse/carbon_budget.py
index d12f2dbc6..37105dbe6 100644
--- a/src/muse/carbon_budget.py
+++ b/src/muse/carbon_budget.py
@@ -102,7 +102,7 @@ def fitting(
         resolution: Number of decimal places to solve the carbon price to
 
     Returns:
-        new_price: adjusted carbon price to meet budget
+        Adjusted carbon price to meet budget
     """
     # Calculate the carbon price and emissions threshold in the investment year
     future = market.year[-1]
diff --git a/src/muse/costs.py b/src/muse/costs.py
index 004fc69db..a4d755676 100644
--- a/src/muse/costs.py
+++ b/src/muse/costs.py
@@ -26,21 +26,22 @@
 The dimensions of the output will be the sum of all dimensions from the input data,
 minus "commodity", plus "timeslice" (if not already present).
 
-Some functions have a `method` argument, which can be "annual" or "lifetime":
-
-Costs can either be annual or lifetime:
-- annual: calculates the cost in a single year
-- lifetime: calculates the total cost over the lifetime of the
-  technology, using the `technical_life` attribute from the `technologies` dataset.
-  - In this case, technology parameters, production, consumption, capacity and prices
-    are assumed to be constant over the lifetime of the technology. The cost in each
-    year is discounted according to the `interest_rate` attribute from the
-    `technologies` dataset, and summed across years.
-  - Capital costs are different, as these are a one time cost for the lifetime of the
-    technology. This can be annualized by dividing by the `technical_life`.
-Some functions can calculate both lifetime and annual costs, with a `method` argument
-to specify. Others can only calculate one or the other (see individual function
-docstrings for more details).
+Some functions have a `method` argument, which can be either ``"annual"`` or
+``"lifetime"``. In brief:
+
+- ``annual``: calculates the cost in a single year.
+- ``lifetime``: calculates the total cost over the lifetime of the technology,
+  using the `technical_life` attribute from the `technologies` dataset. In this
+  case, technology parameters, production, consumption, capacity and prices are
+  assumed constant over the lifetime; annual costs are discounted using the
+  `interest_rate` attribute from the `technologies` dataset and summed across years.
+
+Capital costs are different, as these are a one-time cost for the lifetime of the
+technology. These can be annualized by dividing by `technical_life`.
+
+Some functions can calculate both lifetime and annual costs (use the ``method``
+argument to select); others implement only one of these modes (see individual
+function docstrings for details).
 """
@@ -87,9 +88,10 @@ def capital_costs(
     `capacity` input.

     Method can be "lifetime" or "annual":
-    - lifetime: returns the full capital costs
-    - annual: total capital costs are multiplied by the capital recovery factor to get
-      annualized costs
+
+    - ``lifetime``: returns the full capital costs.
+    - ``annual``: total capital costs are multiplied by the capital recovery factor to
+      obtain annualized costs.
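As an aside for readers of this patch: the annualization described in the ``capital_costs`` docstring relies on a capital recovery factor. A minimal scalar sketch of that idea (the helper name is hypothetical, not MUSE's API) looks like this:

```python
def capital_recovery_factor(interest_rate: float, technical_life: int) -> float:
    """Fraction of a one-time capital cost payable in each year of the lifetime."""
    if interest_rate == 0:
        # With no discounting, the cost is simply spread evenly over the lifetime
        return 1.0 / technical_life
    growth = (1.0 + interest_rate) ** technical_life
    return interest_rate * growth / (growth - 1.0)


# Annualizing 1000 units of capital over 20 years at a 5% interest rate:
annualized = 1000 * capital_recovery_factor(0.05, 20)  # roughly 80.2 per year
```

At zero interest this reduces to dividing by `technical_life`, matching the "annualized by dividing" wording above.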
""" if method not in ["lifetime", "annual"]: raise ValueError("method must be either 'lifetime' or 'annual'.") @@ -249,11 +251,13 @@ def net_present_value( ) -> xr.DataArray: """Net present value (NPV) of the relevant technologies. - The net present value of a technology is the present value of all the revenues that - a technology earns over its lifetime minus all the costs of installing and operating - it. Follows the definition of the `net present cost`_ given by HOMER Energy. - .. _net present cost: - .. https://www.homerenergy.com/products/pro/docs/3.15/net_present_cost.html + The net present value of a technology is the present value of all the revenues that + a technology earns over its lifetime minus all the costs of installing and + operating it. Follows the definition of the `net present cost`_ given by HOMER + Energy. + + .. _net present cost: + https://www.homerenergy.com/products/pro/docs/3.15/net_present_cost.html - energy commodities INPUTS are related to fuel costs - environmental commodities OUTPUTS are related to environmental costs @@ -426,17 +430,17 @@ def levelized_cost_of_energy( :py:func:`running_costs` Can calculate either a lifetime or annual LCOE. - - lifetime: the average cost per unit of production over the entire lifetime of the - technology. - Annual running costs and production are calculated for the full lifetime of the - technology, and adjusted to a present value using the discount rate. Total - costs (running costs over the lifetime + initial capital costs) are then divided - by total production to get the average cost per unit of production. - - annual: the average cost per unit of production in a single year. - Annual running costs and production are calculated for a single year. Capital - costs are multiplied by the capital recovery factor to get an annualized cost. - Total costs (annualized capital costs + running costs) are then divided by - production to get the average cost per unit of production. 
+
+    - ``lifetime``: the average cost per unit of production over the entire lifetime
+      of the technology. Annual running costs and production are calculated for the
+      full lifetime and adjusted to present value using the discount rate. Total
+      costs (running costs over the lifetime + initial capital costs) are divided by
+      total production to obtain the average cost per unit.
+
+    - ``annual``: the average cost per unit of production in a single year. Annual
+      running costs and production are calculated for a single year, capital costs
+      are annualized using the capital recovery factor, and total costs are divided
+      by production to obtain the average cost per unit.

     Arguments:
         technologies: xr.Dataset of technology parameters
diff --git a/src/muse/demand_share.py b/src/muse/demand_share.py
index 991ca939d..746ca3e76 100644
--- a/src/muse/demand_share.py
+++ b/src/muse/demand_share.py
@@ -1,5 +1,7 @@
 """Demand share computations.

+.. currentmodule:: muse.demand_share
+
 The demand share splits a demand amongst agents. It is used within a sector to assign
 part of the input MCA demand to each agent.

@@ -62,6 +64,7 @@ def demand_share(
     "factory",
     "new_and_retro",
     "register_demand_share",
+    "standard_demand",
     "unmet_demand",
     "unmet_forecasted_demand",
 ]
@@ -208,9 +211,8 @@ def new_and_retro(
             {\sum_{i, t, \iota}P[\mathcal{A}_{s, t, \iota}^{r, i}(y)]}

-    #. similarly, each *retrofit* agent gets a share of :math:`N` proportional to it's
-       share of the :py:func:`decommissioning demand
-       <muse.quantities.decommissioning_demand>`, :math:`D^{r, i}_{t, c}`.
+    #. similarly, each *retrofit* agent gets a share of :math:`N` proportional to its
+       share of the ``decommissioning_demand``, :math:`D^{r, i}_{t, c}`.
        Then the share of the demand for retrofit agent :math:`i` is:

        .. math::
@@ -225,12 +227,10 @@
     disaggregated over each technology, rather than not over each *model* of each
     technology (asset).

-    .. SeeAlso::
+    .. seealso::

-        :ref:`indices`, :ref:`quantities`,
-        :ref:`Agent investments`,
-        :py:func:`~muse.quantities.decommissioning_demand`,
-        :py:func:`~muse.quantities.maximum_production`
+        ``decommissioning_demand``,
+        :py:func:`muse.quantities.maximum_production`
     """
     current_year, investment_year = map(int, demand.year.values)

@@ -493,7 +493,8 @@ def unmet_demand(
     The resulting expression has the same indices as the consumption
     :math:`\mathcal{C}_{c, s}^r`.

-    :math:`P` is the maximum production, given by .
+    :math:`P` is the maximum production, given by
+    :py:func:`muse.quantities.maximum_production`.
     """
     from muse.quantities import maximum_production

@@ -540,7 +541,7 @@ def new_consumption(
         \right)

     Where :math:`P` the maximum production by existing assets, given by
-    .
+    :py:func:`muse.quantities.maximum_production`.
     """
     # Validate inputs have matching years
     if not (
diff --git a/src/muse/filters.py b/src/muse/filters.py
index 6f26721c3..941525473 100644
--- a/src/muse/filters.py
+++ b/src/muse/filters.py
@@ -84,6 +84,7 @@ def search_space_initializer(
     "same_enduse",
     "same_fuels",
     "similar_technology",
+    "spend_limit",
     "with_asset_technology",
 ]

diff --git a/src/muse/objectives.py b/src/muse/objectives.py
index e2f0c5050..94a8c9db8 100644
--- a/src/muse/objectives.py
+++ b/src/muse/objectives.py
@@ -420,7 +420,7 @@ def annual_levelized_cost_of_energy(
     It needs to be used for trade agents where the actual service is unknown. It
     follows the `simplified LCOE` given by NREL.

-    See :py:func:`muse.costs.annual_levelized_cost_of_energy` for more details.
+    See :py:func:`muse.costs.levelized_cost_of_energy` for more details.
     """
     from muse.costs import levelized_cost_of_energy as LCOE

@@ -464,7 +464,7 @@ def lifetime_levelized_cost_of_energy(
 ):
     """Levelized cost of energy (LCOE) of technologies over their lifetime.

-    See :py:func:`muse.costs.lifetime_levelized_cost_of_energy` for more details.
+    See :py:func:`muse.costs.levelized_cost_of_energy` for more details.
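The ``unmet_demand`` docstring fixed in this patch describes demand minus what existing assets can produce, floored at zero. A plain-numpy sketch of that idea (illustrative only; the real function operates on labelled xarray data with commodity and region dimensions):

```python
import numpy as np


def unmet_demand(consumption, max_production):
    """Demand that cannot be serviced by existing assets, clipped at zero."""
    return np.maximum(consumption - max_production, 0.0)


# Two commodities: demand of 10 and 5 against maximum production of 7 and 8
shortfall = unmet_demand(np.array([10.0, 5.0]), np.array([7.0, 8.0]))
```

Here `max_production` stands in for the :math:`P` term computed by ``muse.quantities.maximum_production``.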
     The LCOE is set to zero for those timeslices where the production is zero, normally
     due to a zero utilization factor.
diff --git a/src/muse/readers/csv.py b/src/muse/readers/csv.py
index 9a5018f63..e0bc9b625 100644
--- a/src/muse/readers/csv.py
+++ b/src/muse/readers/csv.py
@@ -1,19 +1,22 @@
 """Ensemble of functions to read MUSE data.

 In general, there are three functions per input file:
-`read_x`: This is the overall function that is called to read the data. It takes a
-    `Path` as input, and returns the relevant data structure (usually an xarray). The
-    process is generally broken down into two functions that are called by `read_x`:
-
-`read_x_csv`: This takes a path to a csv file as input and returns a pandas dataframe.
-    There are some consistency checks, such as checking data types and columns. There
-    is also some minor processing at this stage, such as standardising column names,
-    but no structural changes to the data. The general rule is that anything returned
-    by this function should still be valid as an input file if saved to csv.
-`process_x`: This is where more major processing and reformatting of the data is done.
-    It takes the dataframe from `read_x_csv` and returns the final data structure
-    (usually an xarray). There are also some more checks (e.g. checking for nan
-    values).
+
+- ``read_x``: This is the overall function that is called to read the data. It takes a
+  ``Path`` as input, and returns the relevant data structure (usually an xarray). The
+  process is generally broken down into two functions that are called by ``read_x``:
+
+- ``read_x_csv``: This takes a path to a csv file as input and returns a pandas
+  DataFrame. There are some consistency checks, such as checking data types and
+  columns. There is also some minor processing at this stage, such as standardising
+  column names, but no structural changes to the data. The general rule is that
+  anything returned by this function should still be valid as an input file if saved
+  to CSV.
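The three-function reader pattern this docstring describes can be sketched as follows. The column names and checks here are hypothetical, chosen only to show the split between light validation (``read_x_csv``) and structural reshaping (``process_x``):

```python
from io import StringIO

import pandas as pd


def read_x_csv(path) -> pd.DataFrame:
    """Light checks only: standardise column names, verify required columns."""
    df = pd.read_csv(path)
    df.columns = [c.strip().lower() for c in df.columns]
    if not {"technology", "year", "value"} <= set(df.columns):
        raise ValueError("missing required columns")
    return df


def process_x(df: pd.DataFrame) -> pd.DataFrame:
    """Heavier reshaping: pivot to a technology-by-year table, check for NaNs."""
    out = df.pivot(index="technology", columns="year", values="value")
    if out.isna().any().any():
        raise ValueError("NaN values after reshaping")
    return out


def read_x(path) -> pd.DataFrame:
    """Overall entry point: read the CSV, then reformat it."""
    return process_x(read_x_csv(path))


table = read_x(StringIO("Technology,Year,Value\ngas,2020,1.5\ngas,2025,1.2\n"))
```

Note how everything returned by ``read_x_csv`` would still round-trip through ``to_csv`` as a valid input file, while ``process_x`` changes the structure.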
+
+- ``process_x``: This is where more major processing and reformatting of the data is
+  done. It takes the DataFrame from ``read_x_csv`` and returns the final data
+  structure (usually an xarray). There are also some more checks (e.g. checking for
+  NaN values).

 Most of the processing is shared by a few helper functions:
 - read_csv: reads a csv file and returns a dataframe
diff --git a/src/muse/timeslices.py b/src/muse/timeslices.py
index ac1b43e9d..c7603f968 100644
--- a/src/muse/timeslices.py
+++ b/src/muse/timeslices.py
@@ -168,6 +168,7 @@ def compress_timeslice(
     """Convert a fully timesliced array to a coarser level.

     The operation can be either 'sum', or 'mean':
+
     - sum: sum values at each compressed timeslice level
     - mean: take a weighted average of values at each compressed timeslice level,
       according to the timeslice weights in ts
@@ -229,8 +230,9 @@ def expand_timeslice(
     """Convert a timesliced array to a finer level.

     The operation can be either 'distribute', or 'broadcast'
+
     - distribute: distribute values over the new timeslice level(s) according to
-      timeslice weights in `ts`, such that the sum of the output over all timeslices
+      timeslice weights in ``ts``, such that the sum of the output over all timeslices
       is equal to the sum of the input
     - broadcast: broadcast values across over the new timeslice level(s)
diff --git a/src/muse/utilities.py b/src/muse/utilities.py
index dac539e74..66843382e 100644
--- a/src/muse/utilities.py
+++ b/src/muse/utilities.py
@@ -201,21 +201,20 @@ def broadcast_over_assets(
     example, it could also be used on a dataset of commodity prices to select prices
     relevant to each asset (e.g. if assets exist in multiple regions).

-    Arguments:
-        data: The dataset/data-array to broadcast
-        template: The dataset/data-array to use as a template
-        installed_as_year: True means that the "year" dimension in 'data`
+    Args:
+        data: The dataset/data-array to broadcast.
+        template: The dataset/data-array to use as a template.
+        installed_as_year: True means that the ``year`` dimension in ``data``
             corresponds to the year that the asset was installed. This will commonly
-            be the case for most technology parameters (e.g. var_par/fix_par are
-            specified the year that an asset is installed, and fixed for the lifetime of
-            the asset). In this case, `data` must have a year coordinate for every
-            possible "installed" year in the template.
-
-            Conversely, if the values in `data` apply to the year of activity, rather
-            than the year of installation, `installed_as_year` should be False.
+            be the case for most technology parameters (e.g. ``var_par``/``fix_par`` are
+            specified for the year that an asset is installed, and fixed for the
+            lifetime of the asset). In this case, ``data`` must have a ``year``
+            coordinate for every possible ``installed`` year in the template.
+            Conversely, if the values in ``data`` apply to the year of activity, rather
+            than the year of installation, ``installed_as_year`` should be False.
             An example would be commodity prices, which can change over the lifetime
-            of an asset. In this case, if "year" is present as a dimension in `data`,
-            it will be maintained as a separate dimension in the output.
+            of an asset. In this case, if ``year`` is present as a dimension in
+            ``data``, it will be maintained as a separate dimension in the output.

     Example:
         Define the data array:
diff --git a/tests/conftest.py b/tests/conftest.py
index 3cebe1449..d5d728e82 100644
--- a/tests/conftest.py
+++ b/tests/conftest.py
@@ -571,7 +571,6 @@ def saveme(module_name: str, registry_name: str):
     saveme("muse.carbon_budget", "CARBON_BUDGET_METHODS"),
     saveme("muse.constraints", "CONSTRAINTS"),
     saveme("muse.decisions", "DECISIONS"),
-    saveme("muse.decorators", "SETTINGS_CHECKS"),
     saveme("muse.demand_share", "DEMAND_SHARE"),
     saveme("muse.filters", "FILTERS"),
     saveme("muse.hooks", "INITIAL_ASSET_TRANSFORM"),
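The ``expand_timeslice`` docstring touched by this patch distinguishes 'distribute' from 'broadcast'. A minimal numpy sketch of those two semantics, under the simplifying assumption of one coarse timeslice split into fine ones by weight (the real function works on multi-level xarray timeslices):

```python
import numpy as np


def expand_timeslice(value, weights, operation="distribute"):
    """Expand one coarse-timeslice value to finer timeslices."""
    weights = np.asarray(weights, dtype=float)
    if operation == "distribute":
        # Split the value by weight so the fine values sum back to the input
        return value * weights / weights.sum()
    elif operation == "broadcast":
        # Repeat the same value at every fine timeslice
        return np.full_like(weights, value)
    raise ValueError(operation)


fine = expand_timeslice(12.0, [1, 2, 3], "distribute")   # sums back to 12.0
same = expand_timeslice(12.0, [1, 2, 3], "broadcast")
```

Distribute preserves totals (appropriate for extensive quantities like production); broadcast preserves the level (appropriate for intensive quantities like prices).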