{ "cells": [ { "attachments": {}, "cell_type": "markdown", "id": "72380b9b", "metadata": {}, "source": [ "# Using _egobox_ optimizer _Egor_" ] }, { "attachments": {}, "cell_type": "markdown", "id": "31022791", "metadata": {}, "source": [ "## Installation" ] }, { "cell_type": "code", "execution_count": 1, "id": "1504e619-5775-42d3-8f48-7339272303ec", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: egobox in d:\\rlafage\\miniconda3\\lib\\site-packages (0.21.1)\n", "Note: you may need to restart the kernel to use updated packages.\n" ] } ], "source": [ "%pip install egobox" ] }, { "attachments": {}, "cell_type": "markdown", "id": "4c2757f5", "metadata": {}, "source": [ "We import _egobox_ as _egx_ for short." ] }, { "cell_type": "code", "execution_count": 2, "id": "0edaf00f", "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import egobox as egx" ] }, { "cell_type": "markdown", "id": "6c8c8c84", "metadata": {}, "source": [ "You may setup the logging level to get optimization progress during the execution" ] }, { "cell_type": "code", "execution_count": 3, "id": "af2d82be", "metadata": {}, "outputs": [], "source": [ "# To display optimization information (none by default)\n", "# import logging\n", "# logging.basicConfig(level=logging.INFO)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "22997c39", "metadata": {}, "source": [ "## Example 1 : Continuous optimization" ] }, { "attachments": {}, "cell_type": "markdown", "id": "faae2555", "metadata": {}, "source": [ "### Test functions" ] }, { "cell_type": "code", "execution_count": 4, "id": "7f6da807", "metadata": {}, "outputs": [], "source": [ "xspecs_xsinx = egx.to_specs([[0., 25.]])\n", "n_cstr_xsinx = 0\n", "\n", "def xsinx(x: np.ndarray) -> np.ndarray:\n", " x = np.atleast_2d(x)\n", " y = (x - 3.5) * np.sin((x - 3.5) / (np.pi))\n", " return y" ] }, { "cell_type": "code", "execution_count": 5, "id": "4c436437", "metadata": {}, "outputs": [], "source": [ "xspecs_g24 = egx.to_specs([[0., 3.], [0., 4.]])\n", "n_cstr_g24 = 2\n", "\n", "# Objective\n", "def G24(point):\n", " \"\"\"\n", " Function g24\n", " 1 global optimum y_opt = -5.5080 at x_opt =(2.3295, 3.1785)\n", " \"\"\"\n", " p = np.atleast_2d(point)\n", " return - p[:, 0] - p[:, 1]\n", "\n", "# Constraints < 0\n", "def G24_c1(point):\n", " p = np.atleast_2d(point)\n", " return (- 2.0 * p[:, 0] ** 4.0\n", " + 8.0 * p[:, 0] ** 3.0 \n", " - 8.0 * p[:, 0] ** 2.0 \n", " + p[:, 1] - 2.0)\n", "\n", "def G24_c2(point):\n", " p = np.atleast_2d(point)\n", " return (-4.0 * p[:, 0] ** 4.0\n", " + 32.0 * p[:, 0] ** 3.0\n", " - 88.0 * p[:, 0] ** 2.0\n", " + 96.0 * p[:, 0]\n", " + p[:, 1] - 36.0)\n", "\n", "# Grouped evaluation\n", "def g24(point):\n", " p = np.atleast_2d(point)\n", " return np.array([G24(p), G24_c1(p), G24_c2(p)]).T\n" ] }, { "attachments": {}, "cell_type": "markdown", "id": "45641636", "metadata": {}, "source": [ "### Continuous optimization" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "egor = egx.Egor(xspecs_xsinx, n_cstr=n_cstr_xsinx) # see help(egor) for options" ] }, { "cell_type": "code", "execution_count": 7, "id": "c8942031", "metadata": {}, "outputs": [], "source": [ "egor = egx.Egor(xspecs_g24, \n", " n_doe=10, \n", " n_cstr=n_cstr_g24, \n", " cstr_tol=[1e-3, 1e-3],\n", " infill_strategy=egx.InfillStrategy.WB2,\n", " target=-5.5,\n", " trego=True,\n", " # outdir=\"./out\",\n", " # hot_start=True\n", " ) \n", "\n", "# Specify regression 
{ "attachments": {}, "cell_type": "markdown", "id": "45641636", "metadata": {}, "source": [ "### Continuous optimization" ] },
{ "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "egor = egx.Egor(xspecs_xsinx, n_cstr=n_cstr_xsinx)  # see help(egor) for options" ] },
{ "cell_type": "code", "execution_count": 7, "id": "c8942031", "metadata": {}, "outputs": [], "source": [ "egor = egx.Egor(xspecs_g24,\n", "                n_doe=10,\n", "                n_cstr=n_cstr_g24,\n", "                cstr_tol=[1e-3, 1e-3],\n", "                infill_strategy=egx.InfillStrategy.WB2,\n", "                target=-5.5,\n", "                trego=True,\n", "                # outdir=\"./out\",\n", "                # warm_start=True\n", "                )\n", "\n", "# Specify regression and/or correlation models used to build the surrogates of objective and constraints\n", "# egor = egx.Egor(xspecs_g24, n_cstr=n_cstr_g24, n_doe=10,\n", "#                 regr_spec=egx.RegressionSpec.LINEAR,\n", "#                 corr_spec=egx.CorrelationSpec.MATERN32 | egx.CorrelationSpec.MATERN52)" ] },
{ "cell_type": "code", "execution_count": 8, "id": "c12b8e9d", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Optimization f=[-5.50793692e+00 -1.35143852e-04 -5.26158286e-05] at [2.32952661 3.17841031]\n", "Optimization history: \n", "Inputs = [[1.55335241 0.36609899]\n", " [0.48433871 1.59588025]\n", " [2.93418005 2.74753497]\n", " [2.58045769 1.77813431]\n", " [1.06805821 2.89510289]\n", " [1.48471388 3.81690289]\n", " [2.37071614 1.10296966]\n", " [0.82720969 0.44451136]\n", " [2.08425444 3.41673721]\n", " [0.08617534 2.39265705]\n", " [2.24213008 3.99999996]\n", " [2.47071614 1.20296966]\n", " [2.58182726 1.31408077]\n", " [2.63287837 1.43753756]\n", " [2.32952661 3.17841031]]\n", "Outputs = [[-1.91945140e+00 -2.59662097e+00 -2.19714000e+00]\n", " [-2.08021896e+00 -1.48190609e+00 -5.13533364e+00]\n", " [-5.68171502e+00 -1.42792021e+01 2.68270603e+00]\n", " [-4.35859200e+00 -4.70895411e+00 1.94930315e-02]\n", " [-3.96316111e+00 -1.08641233e+00 2.82595019e+00]\n", " [-5.30161677e+00 6.46292378e-01 1.65905814e+00]\n", " [-3.47368580e+00 -2.44182980e+00 -1.87313519e+00]\n", " [-1.27172105e+00 -3.43784549e+00 -1.19300768e-01]\n", " [-5.50099165e+00 1.35506108e+00 -5.26673877e-01]\n", " [-2.47883239e+00 3.38256872e-01 -2.59677570e+01]\n", " [-6.24213005e+00 1.41054706e+00 4.55267296e-01]\n", " [-3.67368580e+00 -3.50219615e+00 -1.22082043e+00]\n", " [-3.89590802e+00 -5.19899351e+00 -4.36126757e-01]\n", " [-4.07041593e+00 -6.11551899e+00 1.04566960e-04]\n", " [-5.50793692e+00 -1.35143852e-04 -5.26158286e-05]]\n" ] } ], "source": [ "res = egor.minimize(g24, max_iters=30)\n", "print(f\"Optimization f={res.y_opt} at {res.x_opt}\")\n", "print(\"Optimization history: \")\n", "print(f\"Inputs = {res.x_doe}\")\n", "print(f\"Outputs = {res.y_doe}\")" ] },
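{ "cell_type": "markdown", "metadata": {}, "source": [ "The best point can also be located within the returned history: `get_result_index` (documented in the `help(egor)` output at the end of this notebook) gives the index of the best evaluation in a `y_doe` array. A minimal usage sketch:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Index of the best evaluation (objective wrt constraints) in the history\n", "best = egor.get_result_index(res.y_doe)\n", "print(f\"Best evaluation index: {best}\")\n", "print(f\"x={res.x_doe[best]}, y={res.y_doe[best]}\")" ] },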
\n", "\n", "For instance with the `xsinx` objective function, we can do:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Optimization f=[-15.12510279] at [18.9344756]\n" ] } ], "source": [ "xlimits = egx.to_specs([[0.0, 25.0]])\n", "egor = egx.Egor(xlimits, seed=42) \n", "\n", "# initial doe\n", "x_doe = egx.lhs(xlimits, 3, seed=42)\n", "y_doe = xsinx(x_doe)\n", "for _ in range(10): # run for 10 iterations\n", " x = egor.suggest(x_doe, y_doe) # ask for best location\n", " x_doe = np.concatenate((x_doe, x))\n", " y_doe = np.concatenate((y_doe, xsinx(x))) \n", "res = egor.get_result(x_doe, y_doe)\n", "print(f\"Optimization f={res.y_opt} at {res.x_opt}\")" ] }, { "attachments": {}, "cell_type": "markdown", "id": "fb8ddd3d", "metadata": {}, "source": [ "## Example 2 : Mixed-integer optimization" ] }, { "attachments": {}, "cell_type": "markdown", "id": "d46259b3", "metadata": {}, "source": [ "### Test function" ] }, { "cell_type": "code", "execution_count": 10, "id": "6948efc1", "metadata": {}, "outputs": [], "source": [ "xspecs_mixint_xsinx = [egx.XSpec(egx.XType.INT, [0, 25])]\n", "n_cstr_mixint_xsinx = 0\n", "\n", "def mixint_xsinx(x: np.ndarray) -> np.ndarray:\n", " x = np.atleast_2d(x)\n", " if (np.abs(np.linalg.norm(np.floor(x))-np.linalg.norm(x))< 1e-8):\n", " y = (x - 3.5) * np.sin((x - 3.5) / (np.pi))\n", " else:\n", " raise ValueError(f\"Bad input: mixint_xsinx accepts integer only, got {x}\")\n", " return y" ] }, { "attachments": {}, "cell_type": "markdown", "id": "67faa229", "metadata": {}, "source": [ "### Mixed-integer optimization with _Egor_" ] }, { "cell_type": "code", "execution_count": 11, "id": "928d1f38", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Optimization f=[-15.12161154] at [19.]\n", "Optimization history: \n", "Inputs = [[22.]\n", " [ 1.]\n", " [ 9.]\n", " [25.]\n", " [21.]\n", " [20.]\n", " [19.]]\n", "Outputs = [[ -7.10960014]\n", " [ 1.78601478]\n", " [ 5.41123083]\n", " [ 11.42919546]\n", " [-11.44370682]\n", " [-14.15453288]\n", " [-15.12161154]]\n" ] } ], "source": [ "egor = egx.Egor(xspecs_mixint_xsinx, \n", " n_doe=3, \n", " infill_strategy=egx.InfillStrategy.EI,\n", " target=-15.12,\n", " ) # see help(egor) for options\n", "res = egor.minimize(mixint_xsinx, max_iters=30)\n", "print(f\"Optimization f={res.y_opt} at {res.x_opt}\")\n", "print(\"Optimization history: \")\n", "print(f\"Inputs = {res.x_doe}\")\n", "print(f\"Outputs = {res.y_doe}\")" ] }, { "attachments": {}, "cell_type": "markdown", "id": "b9747211", "metadata": {}, "source": [ "## Example 3 : More mixed-integer optimization" ] }, { "cell_type": "markdown", "id": "0fe3a862", "metadata": {}, "source": [ "In the following example we see we can have other special integer type cases, where a component of x can take one value out of a list of ordered values (ORD type) or being like an enum value (ENUM type). Those types differ by the processing related to the continuous relaxation made behind the scene:\n", "* For INT type, resulting float is rounded to the closest int value,\n", "* For ORD type, resulting float is cast to closest value among the given valid ones,\n", "* For ENUM type, one hot encoding is performed to give the resulting value. 
" ] }, { "cell_type": "markdown", "id": "9c7d3511", "metadata": {}, "source": [ "### Test function" ] }, { "cell_type": "code", "execution_count": 12, "id": "f1615d5c", "metadata": {}, "outputs": [], "source": [ "# Objective function which takes [FLOAT, ENUM1, ENUM2, ORD] as input\n", "# Note that ENUM values are passed as indice value eg either 0, 1 or 2 for a 3-sized enum \n", "def mixobj(X):\n", " # float\n", " x1 = X[:, 0]\n", " # ENUM 1\n", " c1 = X[:, 1]\n", " x2 = c1 == 0\n", " x3 = c1 == 1\n", " x4 = c1 == 2\n", " # ENUM 2\n", " c2 = X[:, 2]\n", " x5 = c2 == 0\n", " x6 = c2 == 1\n", " # int\n", " i = X[:, 3]\n", "\n", " y = (x2 + 2 * x3 + 3 * x4) * x5 * x1 + (x2 + 2 * x3 + 3 * x4) * x6 * 0.95 * x1 + i\n", " return y.reshape(-1, 1)" ] }, { "cell_type": "markdown", "id": "fa3c4223", "metadata": {}, "source": [ "### Mixed-integer optimization with _Egor_" ] }, { "cell_type": "code", "execution_count": 13, "id": "d14fff89", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Optimization f=[-14.25] at [-5. 2. 1. 0.]\n", "Optimization history: \n", "Inputs = [[ 0.69939824 0. 0. 0. ]\n", " [ 4.84411847 1. 0. 0. ]\n", " [-4.75038813 1. 0. 2. ]\n", " [-1.81967258 2. 1. 2. ]\n", " [ 2.46052467 0. 0. 2. ]\n", " [-2.82859054 0. 0. 2. ]\n", " [ 2.5012666 2. 1. 0. ]\n", " [-0.6935668 2. 1. 3. ]\n", " [-4.77212695 2. 1. 0. ]\n", " [-5. 2. 1. 0. ]\n", " [-5. 2. 1. 0. ]\n", " [-5. 2. 1. 0. ]\n", " [-5. 2. 1. 0. ]\n", " [-5. 2. 1. 0. ]\n", " [-5. 2. 1. 0. ]\n", " [-5. 2. 1. 0. ]\n", " [-5. 2. 1. 0. ]\n", " [-5. 2. 1. 0. ]]\n", "Outputs = [[ 0.69939824]\n", " [ 9.68823694]\n", " [ -7.50077625]\n", " [ -3.18606686]\n", " [ 4.46052467]\n", " [ -0.82859054]\n", " [ 7.12860981]\n", " [ 1.02333461]\n", " [-13.60056181]\n", " [-14.25 ]\n", " [-14.25 ]\n", " [-14.25 ]\n", " [-14.25 ]\n", " [-14.25 ]\n", " [-14.25 ]\n", " [-14.25 ]\n", " [-14.25 ]\n", " [-14.25 ]]\n" ] } ], "source": [ "xtypes = [\n", " egx.XSpec(egx.XType.FLOAT, [-5.0, 5.0]),\n", " egx.XSpec(egx.XType.ENUM, tags=[\"blue\", \"red\", \"green\"]),\n", " egx.XSpec(egx.XType.ENUM, xlimits=[2]),\n", " egx.XSpec(egx.XType.ORD, [0, 2, 3]),\n", "]\n", "egor = egx.Egor(xtypes, seed=42)\n", "res = egor.minimize(mixobj, max_iters=10)\n", "print(f\"Optimization f={res.y_opt} at {res.x_opt}\")\n", "print(\"Optimization history: \")\n", "print(f\"Inputs = {res.x_doe}\")\n", "print(f\"Outputs = {res.y_doe}\")" ] }, { "cell_type": "markdown", "id": "705bf10d", "metadata": {}, "source": [ "Note that `x_opt` result contains indices for corresponding optional tags list hence the second component should be read as 0=\"red\", 1=\"green\", 2=\"blue\", while the third component was unamed 0 correspond to first enum value and 1 to the second one." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Usage" ] }, { "cell_type": "code", "execution_count": 14, "id": "b91f14f2", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Help on Egor in module builtins object:\n", "\n", "class Egor(object)\n", " | Egor(xspecs, n_cstr=0, cstr_tol=None, n_start=20, n_doe=0, doe=None, regr_spec=Ellipsis, corr_spec=Ellipsis, infill_strategy=Ellipsis, q_points=1, par_infill_strategy=Ellipsis, infill_optimizer=Ellipsis, kpls_dim=None, trego=False, n_clusters=1, n_optmod=1, target=Ellipsis, outdir=None, warm_start=False, seed=None)\n", " | \n", " | Optimizer constructor\n", " | \n", " | fun: array[n, nx]) -> array[n, ny]\n", " | the function to be minimized\n", " | fun(x) = [obj(x), cstr_1(x), ... 
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Usage" ] },
{ "cell_type": "code", "execution_count": 14, "id": "b91f14f2", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Help on Egor in module builtins object:\n", "\n", "class Egor(object)\n", " | Egor(xspecs, n_cstr=0, cstr_tol=None, n_start=20, n_doe=0, doe=None, regr_spec=Ellipsis, corr_spec=Ellipsis, infill_strategy=Ellipsis, q_points=1, par_infill_strategy=Ellipsis, infill_optimizer=Ellipsis, kpls_dim=None, trego=False, n_clusters=1, n_optmod=1, target=Ellipsis, outdir=None, warm_start=False, seed=None)\n", " | \n", " | Optimizer constructor\n", " | \n", " | fun: array[n, nx] -> array[n, ny]\n", " | the function to be minimized\n", " | fun(x) = [obj(x), cstr_1(x), ... cstr_k(x)] where\n", " | obj is the objective function [n, nx] -> [n, 1]\n", " | cstr_i is the ith constraint function [n, nx] -> [n, 1]\n", " | and k the number of constraints (n_cstr)\n", " | hence ny = 1 (obj) + k (cstrs)\n", " | cstr functions are expected to be negative (<=0) at the optimum.\n", " | \n", " | n_cstr (int):\n", " | the number of constraint functions.\n", " | \n", " | cstr_tol (list(n_cstr,)):\n", " | List of tolerances for constraints to be satisfied (cstr < tol), list size should be equal to n_cstr.\n", " | None by default means zero tolerances.\n", " | \n", " | xspecs (list(XSpec)) where XSpec(xtype=FLOAT|INT|ORD|ENUM, xlimits=[] or tags=[strings]):\n", " | Specifications of the nx components of the input x (e.g. len(xspecs) == nx)\n", " | Depending on the x type we get the following for xlimits:\n", " | * when FLOAT: xlimits is [float lower_bound, float upper_bound],\n", " | * when INT: xlimits is [int lower_bound, int upper_bound],\n", " | * when ORD: xlimits is [float_1, float_2, ..., float_n],\n", " | * when ENUM: xlimits is just the int size of the enumeration otherwise a list of tags is specified\n", " | (e.g. xlimits=[3] or tags=[\"red\", \"green\", \"blue\"]; tags are there for documentation purposes but\n", " | the tag values themselves are not used, only the indices in the enum, hence\n", " | we can just specify the size of the enum, xlimits=[3]),\n", " | \n", " | n_start (int > 0):\n", " | Number of runs of infill strategy optimizations (best result taken)\n", " | \n", " | n_doe (int >= 0):\n", " | Number of samples of initial LHS sampling (used when DOE not provided by the user).\n", " | When 0 a number of points is computed automatically regarding the number of input variables\n", " | of the function under optimization.\n", " | \n", " | doe (array[ns, nt]):\n", " | Initial DOE containing ns samples:\n", " | either nt = nx then only x are specified and ns evals are done to get y doe values,\n", " | or nt = nx + ny then x = doe[:, :nx] and y = doe[:, nx:] are specified\n", " | \n", " | regr_spec (RegressionSpec flags, an int in [1, 7]):\n", " | Specification of regression models used in gaussian processes.\n", " | Can be RegressionSpec.CONSTANT (1), RegressionSpec.LINEAR (2), RegressionSpec.QUADRATIC (4) or\n", " | any bit-wise union of these values (e.g. RegressionSpec.CONSTANT | RegressionSpec.LINEAR)\n", " | \n", " | corr_spec (CorrelationSpec flags, an int in [1, 15]):\n", " | Specification of correlation models used in gaussian processes.\n", " | Can be CorrelationSpec.SQUARED_EXPONENTIAL (1), CorrelationSpec.ABSOLUTE_EXPONENTIAL (2),\n", " | CorrelationSpec.MATERN32 (4), CorrelationSpec.MATERN52 (8) or\n", " | any bit-wise union of these values (e.g. CorrelationSpec.MATERN32 | CorrelationSpec.MATERN52)\n",
 " | \n", " | infill_strategy (InfillStrategy enum)\n", " | Infill criteria to decide best next promising point.\n", " | Can be either InfillStrategy.EI, InfillStrategy.WB2 or InfillStrategy.WB2S.\n", " | \n", " | q_points (int > 0):\n", " | Number of points to be evaluated to allow parallel evaluation of the function under optimization.\n", " | \n", " | par_infill_strategy (ParInfillStrategy enum)\n", " | Parallel infill criteria (aka qEI) to get virtual next promising points in order to allow\n", " | q parallel evaluations of the function under optimization (only used when q_points > 1)\n", " | Can be either ParInfillStrategy.KB (Kriging Believer),\n", " | ParInfillStrategy.KBLB (KB Lower Bound), ParInfillStrategy.KBUB (KB Upper Bound),\n", " | ParInfillStrategy.CLMIN (Constant Liar Minimum)\n", " | \n", " | infill_optimizer (InfillOptimizer enum)\n", " | Internal optimizer used to optimize infill criteria.\n", " | Can be either InfillOptimizer.COBYLA or InfillOptimizer.SLSQP\n", " | \n", " | kpls_dim (0 < int < nx)\n", " | Number of components to be used when PLS projection is used (a.k.a. the KPLS method).\n", " | This is used to address high-dimensional problems, typically when nx > 9.\n", " | \n", " | trego (bool)\n", " | When true, the TREGO algorithm is used, otherwise the classic EGO algorithm is used.\n", " | \n", " | n_clusters (int >= 0)\n", " | Number of clusters used by the mixture of surrogate experts.\n", " | When set to 0, the number of clusters is determined automatically and refreshed every\n", " | 10-point addition (strictly speaking 'tentative addition', as an addition may fail for some points\n", " | but is counted anyway).\n", " | \n", " | n_optmod (int >= 1)\n", " | Number of iterations between two surrogate model trainings (hyperparameters optimization),\n", " | otherwise previous hyperparameters are re-used. The default value is 1, meaning surrogates are\n", " | properly trained at each iteration. The value is used as a modulo of the iteration number. For instance,\n", " | with a value of 3, after the first iteration surrogates are trained at iterations 3, 6, 9, etc.\n",
\n", " | \n", " | target (float)\n", " | Known optimum used as stopping criterion.\n", " | \n", " | outdir (String)\n", " | Directory to write optimization history and used as search path for warm start doe\n", " | \n", " | warm_start (bool)\n", " | Start by loading initial doe from directory\n", " | \n", " | seed (int >= 0)\n", " | Random generator seed to allow computation reproducibility.\n", " | \n", " | Methods defined here:\n", " | \n", " | get_result(self, /, x_doe, y_doe)\n", " | This function gives the best result given inputs and outputs\n", " | of the function (objective wrt constraints) under minimization.\n", " | \n", " | # Parameters\n", " | x_doe (array[ns, nx]): ns samples where function has been evaluated\n", " | y_doe (array[ns, 1 + n_cstr]): ns values of objective and constraints\n", " | \n", " | # Returns\n", " | optimization result\n", " | x_opt (array[1, nx]): x value where fun is at its minimum subject to constraints\n", " | y_opt (array[1, nx]): fun(x_opt)\n", " | \n", " | get_result_index(self, /, y_doe)\n", " | This function gives the best evaluation index given the outputs\n", " | of the function (objective wrt constraints) under minimization.\n", " | \n", " | # Parameters\n", " | y_doe (array[ns, 1 + n_cstr]): ns values of objective and constraints\n", " | \n", " | # Returns\n", " | index in y_doe of the best evaluation\n", " | \n", " | minimize(self, /, fun, max_iters=20)\n", " | This function finds the minimum of a given function `fun`\n", " | \n", " | # Parameters\n", " | max_iters:\n", " | the iteration budget, number of fun calls is n_doe + q_points * max_iters.\n", " | \n", " | # Returns\n", " | optimization result\n", " | x_opt (array[1, nx]): x value where fun is at its minimum subject to constraints\n", " | y_opt (array[1, nx]): fun(x_opt)\n", " | \n", " | suggest(self, /, x_doe, y_doe)\n", " | This function gives the next best location where to evaluate the function\n", " | under optimization wrt to previous evaluations.\n", " | The function returns several point when multi point qEI strategy is used.\n", " | \n", " | # Parameters\n", " | x_doe (array[ns, nx]): ns samples where function has been evaluated\n", " | y_doe (array[ns, 1 + n_cstr]): ns values of objecctive and constraints\n", " | \n", " | \n", " | # Returns\n", " | (array[1, nx]): suggested location where to evaluate objective and constraints\n", " | \n", " | ----------------------------------------------------------------------\n", " | Static methods defined here:\n", " | \n", " | __new__(*args, **kwargs) from builtins.type\n", " | Create and return a new object. See help(type) for accurate signature.\n", "\n" ] } ], "source": [ "help(egor)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.5" } }, "nbformat": 4, "nbformat_minor": 5 }