Using pre-defined Environments

The functions defined in this module are the easiest and most convenient way to create a valid grid2op.Environment.Environment.

To get started with such an environment, you can simply do:

>>> import grid2op
>>> env = grid2op.make()

You can consult the different notebooks in the getting_started directory of this package for more information on how to use it.

The created Environment should behave exactly like a gym environment. If you notice any unwanted behavior, please open an issue in the official grid2op repository: Grid2Op

The environment created with this method should be fully compatible with the gym framework: if you are developing a new “Reinforcement Learning” algorithm and you used the openai gym framework to do so, you can port your code in a few minutes (basically this consists of adapting the input and output dimensions of your BaseAgent) and make it work with a Grid2Op environment. An example of such modifications is shown in the getting_started/ notebooks.
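
For instance, the usual gym-style interaction loop can be written as follows. This is only a minimal sketch using the “do nothing” action, which env.action_space() is expected to return when called without argument:

>>> import grid2op
>>> env = grid2op.make()
>>> obs = env.reset()          # same convention as gym: reset() returns the first observation
>>> done = False
>>> total_reward = 0.0
>>> while not done:
...     action = env.action_space()                  # the "do nothing" action
...     obs, reward, done, info = env.step(action)   # gym-like step signature
...     total_reward += reward
>>> print("cumulated reward: {}".format(total_reward))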

grid2op.MakeEnv.make(name_env='case14_realistic', **kwargs)

This function is a shortcut to rapidly create some (pre-defined) environments within the grid2op framework.

For now, only the environment corresponding to the IEEE “case14” powergrid, with some pre-defined chronics, is available.

Other environments, with different powergrids will be made available in the future.

It mimics the gym.make function.

Parameters
  • name_env (str) – Name of the environment to create.

  • param (grid2op.Parameters.Parameters, optional) – Type of parameters used for the Environment. Parameters define how the powergrid problem is cast into a Markov decision process.

  • backend (grid2op.Backend.Backend, optional) – The backend to use for the computation. If provided, it must be an instance of grid2op.Backend.Backend.

  • action_class (type, optional) – Type of BaseAction the BaseAgent will be able to perform. If provided, it must be a subclass of grid2op.Action.BaseAction

  • observation_class (type, optional) – Type of BaseObservation the BaseAgent will receive. If provided, it must be a subclass of grid2op.Observation.BaseObservation

  • reward_class (type, optional) – Type of reward signal the BaseAgent will receive. If provided, it must be a subclass of grid2op.Reward.BaseReward

  • gamerules_class (type, optional) – Type of “Rules” the BaseAgent needs to comply with. Rules are here to model some operational constraints. If provided, it must be a subclass of grid2op.RulesChecker.BaseRules

  • grid_path (str, optional) – The path where the powergrid is located. If provided, it must be a string pointing to a valid file present on the hard drive.

  • data_feeding_kwargs (dict, optional) – Dictionary used to build the data_feeding (chronics) objects.

  • chronics_class (type, optional) – The type of chronics that represents the dynamics of the Environment created. Usually they come from different folders.

  • data_feeding (type, optional) – The type of chronics handler you want to use.

  • chronics_path (str) – Path where to look for the chronics dataset.

  • volagecontroler_class (type, optional) – The type of grid2op.VoltageControler.VoltageControler to use.

  • other_rewards (dict, optional) – Dictionary of other rewards you might want to look at during training. It is given as a dictionary with, as keys, the names of the rewards and, as values, the classes representing these additional rewards.

Returns

env – The created environment.

Return type

grid2op.Environment.Environment
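
As an illustration, here is a hedged sketch combining some of these optional arguments. L2RPNReward and FlatReward are reward classes shipped in grid2op.Reward, and the extra rewards passed through other_rewards are assumed to be exposed in the info dictionary returned by step, under the “rewards” key:

>>> import grid2op
>>> from grid2op.Reward import L2RPNReward, FlatReward
>>> env = grid2op.make("case14_realistic",
...                    reward_class=L2RPNReward,
...                    other_rewards={"flat": FlatReward})
>>> obs = env.reset()
>>> obs, reward, done, info = env.step(env.action_space())  # "do nothing" action
>>> print(reward)                     # value of the main L2RPNReward
>>> print(info["rewards"]["flat"])    # value of the additional FlatReward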

grid2op.MakeEnv.make2(dataset_path='/', **kwargs)

This function is a shortcut to rapidly create environments within the grid2op Framework.

It mimics the gym.make function.

Parameters
  • dataset_path (str) – Path to the dataset folder

  • param (grid2op.Parameters.Parameters, optional) – Type of parameters used for the Environment. Parameters define how the powergrid problem is cast into a Markov decision process.

  • backend (grid2op.Backend.Backend, optional) – The backend to use for the computation. If provided, it must be an instance of grid2op.Backend.Backend.

  • action_class (type, optional) – Type of BaseAction the BaseAgent will be able to perform. If provided, it must be a subclass of grid2op.Action.BaseAction

  • observation_class (type, optional) – Type of BaseObservation the BaseAgent will receive. If provided, it must be a subclass of grid2op.Observation.BaseObservation

  • reward_class (type, optional) – Type of reward signal the BaseAgent will receive. If provided, it must be a subclass of grid2op.Reward.BaseReward

  • gamerules_class (type, optional) – Type of “Rules” the BaseAgent needs to comply with. Rules are here to model some operational constraints. If provided, it must be a subclass of grid2op.RulesChecker.BaseRules

  • data_feeding_kwargs (dict, optional) – Dictionary used to build the data_feeding (chronics) objects.

  • chronics_class (type, optional) – The type of chronics that represents the dynamics of the Environment created. Usually they come from different folders.

  • data_feeding (type, optional) – The type of chronics handler you want to use.

  • volagecontroler_class (type, optional) – The type of grid2op.VoltageControler.VoltageControler to use.

Returns

env – The created environment.

Return type

grid2op.Environment.Environment
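
As a hedged sketch, make2 is typically pointed at a dataset folder already present on the hard drive; the path below is purely illustrative:

>>> import grid2op
>>> env = grid2op.make2("/path/to/my/dataset")
>>> obs = env.reset()
>>> print(obs.rho)   # relative loading of each powerline, one attribute of the observation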
