Tutorial: Electricity-Only#
Note
If you have not done it yet, follow the Installation steps first.
In this tutorial, we will build a heavily simplified power system model for Belgium. Before getting started with PyPSA-Eur, it makes sense to be familiar with its general modelling framework PyPSA.
Running the tutorial requires limited computational resources compared to the full model, which allows the user to explore most of its functionalities on a local machine. The tutorial covers examples of how to configure and customise the PyPSA-Eur model and how to run the snakemake workflow step by step from network creation to the solved network. The configuration for the tutorial is located at config/test/config.electricity.yaml. It only includes the parts deviating from the default configuration file config/config.default.yaml. To run the tutorial with this configuration, execute
snakemake -call results/test-elec/networks/base_s_6_elec_lcopt_.nc --configfile config/test/config.electricity.yaml
This configuration is set to download a reduced cutout via the rule retrieve_cutout.
For more information on the data dependencies of PyPSA-Eur, continue reading Retrieving Data.
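If you would like to fetch the cutout on its own before launching the full workflow, you can also target the cutout file directly. This is a minimal sketch, assuming the cutout is stored under the default cutouts/ directory with the name defined in the tutorial configuration:
snakemake -call cutouts/be-03-2013-era5.nc --configfile config/test/config.electricity.yaml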
How to configure runs?#
The model can be adapted to only include selected countries (e.g. Belgium) instead of all European countries to limit the spatial scope.
countries: ['BE']
Likewise, the example’s temporal scope can be restricted (e.g. to a single week).
snapshots:
start: "2013-03-01"
end: "2013-03-08"
It is also possible to allow more or less carbon dioxide emissions. Here, we limit the emissions of Belgium to 100 Mt of CO2 per year.
electricity:
co2limit_enable: true
co2limit: 100.e+6
PyPSA-Eur also includes a database of existing conventional powerplants. We can select which types of existing powerplants we would like to be extendable:
extendable_carriers:
Generator: [OCGT]
StorageUnit: [battery]
Store: [H2]
Link: [H2 pipeline]
To accurately model the temporal and spatial availability of renewables such as wind and solar energy, we rely on historical weather data. It is advisable to adapt the required range of coordinates to the selection of countries.
atlite:
default_cutout: be-03-2013-era5
cutouts:
be-03-2013-era5:
module: era5
x: [4., 15.]
y: [46., 56.]
time: ["2013-03-01", "2013-03-08"]
We can also decide which weather data source should be used to calculate potentials and capacity factor time-series for each carrier. For example, we may want to use the ERA-5 dataset for solar and not the default SARAH-3 dataset.
solar:
cutout: be-03-2013-era5
Finally, it is possible to pick a solver. For instance, this tutorial uses the open-source solver HiGHS.
solver:
name: highs
options: highs-default
Note that config/test/config.electricity.yaml only includes changes relative to the default configuration. There are many more configuration options, which are documented at Configuration.
How to use snakemake rules?#
Open a terminal, go into the PyPSA-Eur directory, and activate the pypsa-eur environment with
mamba activate pypsa-eur
Let’s say, based on the modifications above, we would like to solve a very simplified model clustered down to 6 buses, with every 24 hours aggregated to one snapshot. The command
snakemake -call results/test-elec/networks/base_s_6_elec_lcopt_.nc --configfile config/test/config.electricity.yaml
orders snakemake to run the rule solve_network, which produces the solved network and stores it in results/networks with the name base_s_6_elec_lcopt_.nc:
rule solve_network:
params:
solving=config_provider("solving"),
foresight=config_provider("foresight"),
planning_horizons=config_provider("scenario", "planning_horizons"),
co2_sequestration_potential=config_provider(
"sector", "co2_sequestration_potential", default=200
),
custom_extra_functionality=input_custom_extra_functionality,
input:
network=resources("networks/base_s_{clusters}_elec_l{ll}_{opts}.nc"),
output:
network=RESULTS + "networks/base_s_{clusters}_elec_l{ll}_{opts}.nc",
config=RESULTS + "configs/config.base_s_{clusters}_elec_l{ll}_{opts}.yaml",
log:
solver=normpath(
RESULTS
+ "logs/solve_network/base_s_{clusters}_elec_l{ll}_{opts}_solver.log"
),
python=RESULTS
+ "logs/solve_network/base_s_{clusters}_elec_l{ll}_{opts}_python.log",
benchmark:
(RESULTS + "benchmarks/solve_network/base_s_{clusters}_elec_l{ll}_{opts}")
threads: solver_threads
resources:
mem_mb=memory,
runtime=config_provider("solving", "runtime", default="6h"),
shadow:
"shallow"
conda:
"../envs/environment.yaml"
script:
"../scripts/solve_network.py"
This triggers a workflow of multiple preceding jobs that depend on each rule’s inputs and outputs.
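If you want to inspect this dependency graph yourself, snakemake can export it in Graphviz dot format. As a sketch, assuming Graphviz (the dot command) is installed, the following writes the job graph for the tutorial target to an SVG file:
snakemake --dag results/test-elec/networks/base_s_6_elec_lcopt_.nc --configfile config/test/config.electricity.yaml | dot -Tsvg > dag.svg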
In the terminal, this will show up as a list of jobs to be run:
Building DAG of jobs...
Job stats:
job count
------------------------------------- -------
add_electricity 1
add_transmission_projects_and_dlr 1
base_network 1
build_electricity_demand 1
build_electricity_demand_base 1
build_line_rating 1
build_powerplants 1
build_renewable_profiles 6
build_shapes 1
build_ship_raster 1
build_transmission_projects 1
cluster_network 1
determine_availability_matrix 6
prepare_network 1
retrieve_cost_data 1
retrieve_cutout 1
retrieve_databundle 1
retrieve_eez 1
retrieve_electricity_demand 1
retrieve_naturalearth_countries 1
retrieve_osm_prebuilt 1
retrieve_ship_raster 1
retrieve_synthetic_electricity_demand 1
simplify_network 1
solve_network 1
total 35
snakemake then runs these jobs in the correct order.
A job (here simplify_network) will display its attributes and normally some logs below this block:
rule simplify_network:
input: resources/test/networks/base_extended.nc, resources/test/regions_onshore.geojson, resources/test/regions_offshore.geojson
output: resources/test/networks/base_s.nc, resources/test/regions_onshore_base_s.geojson, resources/test/regions_offshore_base_s.geojson, resources/test/busmap_base_s.csv
log: logs/test/simplify_network.log
jobid: 10
benchmark: benchmarks/test/simplify_network_b
reason: Forced execution
resources: tmpdir=<TBD>, mem_mb=12000, mem_mib=11445
Once the whole workflow is finished, it should state so in the terminal.
You will notice that many intermediate stages are saved, namely the outputs of each individual snakemake rule.
You can produce any output file occurring in the Snakefile by running
snakemake -call <output file>
For example, you can explore the evolution of the PyPSA networks by running
snakemake resources/networks/base.nc -call --configfile config/test/config.electricity.yaml
snakemake resources/networks/base_s.nc -call --configfile config/test/config.electricity.yaml
snakemake resources/networks/base_s_6.nc -call --configfile config/test/config.electricity.yaml
snakemake resources/networks/base_s_6_elec_lcopt_.nc -call --configfile config/test/config.electricity.yaml
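The numbers and letters in these file names are wildcard values: the number after base_s_ sets the number of clustered buses, l{ll} sets the transmission expansion limit, and the trailing field carries further options. You can therefore also target other combinations directly. As an illustrative sketch (this particular wildcard value is not part of the tutorial configuration), a run with line volume expansion limited to 1.25 times today's level could be requested with:
snakemake -call results/test-elec/networks/base_s_6_elec_lv1.25_.nc --configfile config/test/config.electricity.yaml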
To run all combinations of wildcard values provided in config/config.yaml under scenario:, you can use the collection rule solve_elec_networks.
snakemake -call solve_elec_networks --configfile config/test/config.electricity.yaml
If you now feel confident and want to tackle runs with larger temporal and spatial scope, clean up the repository and, after modifying the config/config.yaml file, target the collection rule solve_elec_networks again without providing the test configuration file.
snakemake -call purge
snakemake -call solve_elec_networks
Note
It is good practice to perform a dry-run using the option -n before you commit to a run:
snakemake -call solve_elec_networks -n
How to analyse results?#
The solved networks can be analysed just like any other PyPSA network (e.g. in Jupyter Notebooks).
import pypsa
n = pypsa.Network("results/networks/base_s_6_elec_lcopt_.nc")
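Building on the network object n loaded above, a few one-liners already give a first overview of the results. This is a sketch using standard PyPSA and pandas accessors; the exact carrier names you see depend on your configuration:
# total system cost of the optimised network
print(n.objective)

# optimised generator capacities aggregated by carrier (in MW)
print(n.generators.p_nom_opt.groupby(n.generators.carrier).sum())

# energy produced per carrier, summed over all snapshots
print(n.generators_t.p.sum().groupby(n.generators.carrier).sum())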
For inspiration, read the examples section in the PyPSA documentation.