{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "tCOWitsAS1EE" }, "source": [ "# Parallel Evaluation in JAX\n", "\n", "[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/google/jax/blob/main/docs/jax-101/06-parallelism.ipynb)\n", "\n", "*Authors: Vladimir Mikulik & Roman Ring*\n", "\n", "In this section we will discuss the facilities built into JAX for single-program, multiple-data (SPMD) code.\n", "\n", "SPMD refers to a parallelism technique where the same computation (e.g., the forward pass of a neural net) is run on different input data (e.g., different inputs in a batch) in parallel on different devices (e.g., several TPUs).\n", "\n", "Conceptually, this is not very different from vectorisation, where the same operations occur in parallel in different parts of memory on the same device. We have already seen that vectorisation is supported in JAX as a program transformation, `jax.vmap`. JAX supports device parallelism analogously, using `jax.pmap` to transform a function written for one device into a function that runs in parallel on multiple devices. This colab will teach you all about it." ] }, { "cell_type": "markdown", "metadata": { "id": "7mCgBzix2fd3" }, "source": [ "## Colab TPU Setup\n", "\n", "If you're running this code in Google Colab, be sure to choose *Runtime*→*Change Runtime Type* and choose **TPU** from the Hardware Accelerator menu.\n", "\n", "Once this is done, you can run the following to set up the Colab TPU for use with JAX:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "id": "hn7HtC2QS92b" }, "outputs": [], "source": [ "import jax.tools.colab_tpu\n", "jax.tools.colab_tpu.setup_tpu()" ] }, { "cell_type": "markdown", "metadata": { "id": "gN6VbcdRTcdE" }, "source": [ "Next run the following to see the TPU devices you have available:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "id": "tqbpCcqY3Cn7", "outputId": "1fb88cf7-35f7-4565-f370-51586213b988" }, "outputs": [ { "data": { "text/plain": [ "[TpuDevice(id=0, host_id=0, coords=(0,0,0), core_on_chip=0),\n", " TpuDevice(id=1, host_id=0, coords=(0,0,0), core_on_chip=1),\n", " TpuDevice(id=2, host_id=0, coords=(1,0,0), core_on_chip=0),\n", " TpuDevice(id=3, host_id=0, coords=(1,0,0), core_on_chip=1),\n", " TpuDevice(id=4, host_id=0, coords=(0,1,0), core_on_chip=0),\n", " TpuDevice(id=5, host_id=0, coords=(0,1,0), core_on_chip=1),\n", " TpuDevice(id=6, host_id=0, coords=(1,1,0), core_on_chip=0),\n", " TpuDevice(id=7, host_id=0, coords=(1,1,0), core_on_chip=1)]" ] }, "execution_count": 2, "metadata": { "tags": [] }, "output_type": "execute_result" } ], "source": [ "import jax\n", "jax.devices()" ] }, { "cell_type": "markdown", "metadata": { "id": "4_EDa0Dlgtf8" }, "source": [ "## The basics\n", "\n", "The most basic use of `jax.pmap` is completely analogous to `jax.vmap`, so let's return to the convolution example from the [Vectorisation notebook](https://colab.research.google.com/github/google/jax/blob/main/docs/jax-101/03-vectorization.ipynb)." 
] }, { "cell_type": "code", "execution_count": 5, "metadata": { "id": "IIQKBr-CgtD2", "outputId": "6e7f8755-fdfd-4cf9-e2b5-a10c5a870dd4" }, "outputs": [ { "data": { "text/plain": [ "DeviceArray([11., 20., 29.], dtype=float32)" ] }, "execution_count": 5, "metadata": { "tags": [] }, "output_type": "execute_result" } ], "source": [ "import numpy as np\n", "import jax.numpy as jnp\n", "\n", "x = np.arange(5)\n", "w = np.array([2., 3., 4.])\n", "\n", "def convolve(x, w):\n", " output = []\n", " for i in range(1, len(x)-1):\n", " output.append(jnp.dot(x[i-1:i+2], w))\n", " return jnp.array(output)\n", "\n", "convolve(x, w)" ] }, { "cell_type": "markdown", "metadata": { "id": "lqxz9NNJOQ9Z" }, "source": [ "Now, let's convert our `convolve` function into one that runs on entire batches of data. In anticipation of spreading the batch across several devices, we'll make the batch size equal to the number of devices:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "id": "ll-hEa0jihzx", "outputId": "788be05a-10d4-4a05-8d9d-49d0083541ab" }, "outputs": [ { "data": { "text/plain": [ "array([[ 0, 1, 2, 3, 4],\n", " [ 5, 6, 7, 8, 9],\n", " [10, 11, 12, 13, 14],\n", " [15, 16, 17, 18, 19],\n", " [20, 21, 22, 23, 24],\n", " [25, 26, 27, 28, 29],\n", " [30, 31, 32, 33, 34],\n", " [35, 36, 37, 38, 39]])" ] }, "execution_count": 6, "metadata": { "tags": [] }, "output_type": "execute_result" } ], "source": [ "n_devices = jax.local_device_count() \n", "xs = np.arange(5 * n_devices).reshape(-1, 5)\n", "ws = np.stack([w] * n_devices)\n", "\n", "xs" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "id": "mi-nysDWYbn4", "outputId": "2d115fc3-52f5-4a68-c3a7-115111a83657" }, "outputs": [ { "data": { "text/plain": [ "array([[2., 3., 4.],\n", " [2., 3., 4.],\n", " [2., 3., 4.],\n", " [2., 3., 4.],\n", " [2., 3., 4.],\n", " [2., 3., 4.],\n", " [2., 3., 4.],\n", " [2., 3., 4.]])" ] }, "execution_count": 7, "metadata": { "tags": [] }, "output_type": "execute_result" } ], "source": [ "ws" ] }, { "cell_type": "markdown", "metadata": { "id": "8kseIB09YWJw" }, "source": [ "As before, we can vectorise using `jax.vmap`:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "id": "TNb9HsFXYVOI", "outputId": "2e60e07a-6687-49ab-a455-60d2ec484363" }, "outputs": [ { "data": { "text/plain": [ "DeviceArray([[ 11., 20., 29.],\n", " [ 56., 65., 74.],\n", " [101., 110., 119.],\n", " [146., 155., 164.],\n", " [191., 200., 209.],\n", " [236., 245., 254.],\n", " [281., 290., 299.],\n", " [326., 335., 344.]], dtype=float32)" ] }, "execution_count": 8, "metadata": { "tags": [] }, "output_type": "execute_result" } ], "source": [ "jax.vmap(convolve)(xs, ws)" ] }, { "cell_type": "markdown", "metadata": { "id": "TDF1vzt_5GMC" }, "source": [ "To spread out the computation across multiple devices, just replace `jax.vmap` with `jax.pmap`:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "id": "KWoextrails4", "outputId": "bad1fbb7-226a-4538-e442-20ce0c1c8fad" }, "outputs": [ { "data": { "text/plain": [ "ShardedDeviceArray([[ 11., 20., 29.],\n", " [ 56., 65., 74.],\n", " [101., 110., 119.],\n", " [146., 155., 164.],\n", " [191., 200., 209.],\n", " [236., 245., 254.],\n", " [281., 290., 299.],\n", " [326., 335., 344.]], dtype=float32)" ] }, "execution_count": 9, "metadata": { "tags": [] }, "output_type": "execute_result" } ], "source": [ "jax.pmap(convolve)(xs, ws)" ] }, { "cell_type": "markdown", "metadata": { "id": "E69cVxQPksxe" }, "source": [ "Note that the parallelized `convolve` returns a 
`ShardedDeviceArray`. That is because the elements of this array are sharded across all of the devices used in the parallelism. If we were to run another parallel computation, the elements would stay on their respective devices, without incurring cross-device communication costs." ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "id": "P9dUyk-ciquy", "outputId": "99ea4c6e-cff7-4611-e9e5-bf016fa9716c" }, "outputs": [ { "data": { "text/plain": [ "ShardedDeviceArray([[ 78., 138., 198.],\n", " [ 1188., 1383., 1578.],\n", " [ 3648., 3978., 4308.],\n", " [ 7458., 7923., 8388.],\n", " [12618., 13218., 13818.],\n", " [19128., 19863., 20598.],\n", " [26988., 27858., 28728.],\n", " [36198., 37203., 38208.]], dtype=float32)" ] }, "execution_count": 11, "metadata": { "tags": [] }, "output_type": "execute_result" } ], "source": [ "jax.pmap(convolve)(xs, jax.pmap(convolve)(xs, ws))" ] }, { "cell_type": "markdown", "metadata": { "id": "iuHqht-OYqca" }, "source": [ "The outputs of the inner `jax.pmap(convolve)` never left their devices when being fed into the outer `jax.pmap(convolve)`." ] }, { "cell_type": "markdown", "metadata": { "id": "vEFAJXN2q3dV" }, "source": [ "## Specifying `in_axes`\n", "\n", "Like with `vmap`, we can use `in_axes` to specify whether an argument to the parallelized function should be broadcast (`None`), or whether it should be split along a given axis. Note, however, that unlike `vmap`, only the leading axis (`0`) is supported by `pmap` at the time of writing this guide." ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "id": "6Es5WVuRlXnB", "outputId": "7e9612ae-d6e0-4d79-a228-f0403fcf8237" }, "outputs": [ { "data": { "text/plain": [ "ShardedDeviceArray([[ 11., 20., 29.],\n", " [ 56., 65., 74.],\n", " [101., 110., 119.],\n", " [146., 155., 164.],\n", " [191., 200., 209.],\n", " [236., 245., 254.],\n", " [281., 290., 299.],\n", " [326., 335., 344.]], dtype=float32)" ] }, "execution_count": 12, "metadata": { "tags": [] }, "output_type": "execute_result" } ], "source": [ "jax.pmap(convolve, in_axes=(0, None))(xs, w)" ] }, { "cell_type": "markdown", "metadata": { "id": "EoN6drHDOlk4" }, "source": [ "Notice how we get equivalent output to what we observe above with `jax.pmap(convolve)(xs, ws)`, where we manually replicated `w` when creating `ws`. Here, it is replicated via broadcasting, by specifying it as `None` in `in_axes`." ] }, { "cell_type": "markdown", "metadata": { "id": "rRE8STSU5cjx" }, "source": [ "Keep in mind that when calling the transformed function, the size of the specified axis in arguments must not exceed the number of devices available to the host." ] }, { "cell_type": "markdown", "metadata": { "id": "0lZnqImd7G6U" }, "source": [ "## `pmap` and `jit`\n", "\n", "`jax.pmap` JIT-compiles the function given to it as part of its operation, so there is no need to additionally `jax.jit` it." ] }, { "cell_type": "markdown", "metadata": { "id": "1jZqk_2AwO4y" }, "source": [ "## Communication between devices\n", "\n", "The above is enough to perform simple parallel operations, e.g. batching a simple MLP forward pass across several devices. However, sometimes we need to pass information between the devices. For example, perhaps we are interested in normalizing the output of each device so they sum to 1.\n", "For that, we can use special [collective ops](https://jax.readthedocs.io/en/latest/jax.lax.html#parallel-operators) (such as the `jax.lax.p*` ops `psum`, `pmean`, `pmax`, ...). 
In order to use the collective ops we must specify the name of the `pmap`-ed axis through the `axis_name` argument, and then refer to it when calling the op. Here's how to do that:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "id": "0nCxGwqmtd3w", "outputId": "6f9c93b0-51ed-40c5-ca5a-eacbaf40e686" }, "outputs": [ { "data": { "text/plain": [ "ShardedDeviceArray([[0.00816024, 0.01408451, 0.019437 ],\n", " [0.04154303, 0.04577465, 0.04959785],\n", " [0.07492582, 0.07746479, 0.07975871],\n", " [0.10830861, 0.10915492, 0.10991956],\n", " [0.14169139, 0.14084506, 0.14008042],\n", " [0.17507419, 0.17253521, 0.17024128],\n", " [0.20845698, 0.20422535, 0.20040214],\n", " [0.24183977, 0.23591548, 0.23056298]], dtype=float32)" ] }, "execution_count": 13, "metadata": { "tags": [] }, "output_type": "execute_result" } ], "source": [ "def normalized_convolution(x, w):\n", "  output = []\n", "  for i in range(1, len(x)-1):\n", "    output.append(jnp.dot(x[i-1:i+2], w))\n", "  output = jnp.array(output)\n", "  return output / jax.lax.psum(output, axis_name='p')\n", "\n", "jax.pmap(normalized_convolution, axis_name='p')(xs, ws)" ] }, { "cell_type": "markdown", "metadata": { "id": "9ENYsJS42YVK" }, "source": [ "The `axis_name` is just a string label that allows collective operations like `jax.lax.psum` to refer to the axis bound by `jax.pmap`. It can be named anything you want -- in this case, `p`. This name is essentially invisible to anything but those functions, and those functions use it to know which axis to communicate across.\n", "\n", "`jax.vmap` also supports `axis_name`, which allows `jax.lax.p*` operations to be used in the vectorisation context in the same way they would be used in a `jax.pmap`:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "id": "nT61xAYJUqCW", "outputId": "e8831025-78a6-4a2b-a60a-3c77b35214ef" }, "outputs": [ { "data": { "text/plain": [ "DeviceArray([[0.00816024, 0.01408451, 0.019437 ],\n", " [0.04154303, 0.04577465, 0.04959785],\n", " [0.07492582, 0.07746479, 0.07975871],\n", " [0.10830861, 0.10915492, 0.10991956],\n", " [0.14169139, 0.14084506, 0.14008042],\n", " [0.17507419, 0.17253521, 0.17024128],\n", " [0.20845698, 0.20422535, 0.20040214],\n", " [0.24183977, 0.23591548, 0.23056298]], dtype=float32)" ] }, "execution_count": 14, "metadata": { "tags": [] }, "output_type": "execute_result" } ], "source": [ "jax.vmap(normalized_convolution, axis_name='p')(xs, ws)" ] }, { "cell_type": "markdown", "metadata": { "id": "JSK-9dbWWV2O" }, "source": [ "Note that `normalized_convolution` will no longer work without being transformed by `jax.pmap` or `jax.vmap`, because `jax.lax.psum` expects there to be a named axis (`'p'`, in this case), and those two transformations are the only way to bind one.\n", "\n", "## Nesting `jax.pmap` and `jax.vmap`\n", "\n", "The reason we specify `axis_name` as a string is so we can use collective operations when nesting `jax.pmap` and `jax.vmap`. For example:\n", "\n", "```python\n", "jax.vmap(jax.pmap(f, axis_name='i'), axis_name='j')\n", "```\n", "\n", "A `jax.lax.psum(..., axis_name='i')` in `f` would refer only to the pmapped axis, since they share the `axis_name`.\n", "\n", "In general, `jax.pmap` and `jax.vmap` can be nested in any order, and with themselves (so you can have a `pmap` within another `pmap`, for instance)."
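, "\n", "\n", "As a minimal sketch of the nesting above (the function and array shapes here are illustrative assumptions, with the inner axis sized to fit the local devices), a `jax.lax.psum` over `'i'` reduces across devices only and leaves the vmapped axis `'j'` untouched:\n", "\n", "```python\n", "import jax\n", "import jax.numpy as jnp\n", "\n", "def f(x):\n", "  # psum over 'i' sums across devices only; the vmapped axis 'j' is untouched.\n", "  return x / jax.lax.psum(x, axis_name='i')\n", "\n", "n_devices = jax.local_device_count()\n", "xs = 1.0 + jnp.arange(3.0 * n_devices).reshape(3, n_devices)  # axes: ('j', 'i')\n", "\n", "out = jax.vmap(jax.pmap(f, axis_name='i'), axis_name='j')(xs)\n", "# Every row of `out` now sums to 1, because the sum was taken over 'i' only.\n", "```"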
] }, { "cell_type": "markdown", "metadata": { "id": "WzQHxnHkCxej" }, "source": [ "## Example\n", "\n", "Here's an example of a regression training loop with data parallelism, where each batch is split into sub-batches which are evaluated on separate devices.\n", "\n", "There are two places to pay attention to:\n", "* the `update()` function\n", "* the replication of parameters and splitting of data across devices.\n", "\n", "If this example is too confusing, you can find the same example, but without parallelism, in the next notebook, [State in JAX](https://colab.research.google.com/github/google/jax/blob/main/docs/jax-101/07-state.ipynb). Once that example makes sense, you can compare the differences to understand how parallelism changes the picture." ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "id": "cI8xQqzRrc-4" }, "outputs": [], "source": [ "from typing import NamedTuple, Tuple\n", "import functools\n", "\n", "class Params(NamedTuple):\n", " weight: jnp.ndarray\n", " bias: jnp.ndarray\n", "\n", "\n", "def init(rng) -> Params:\n", " \"\"\"Returns the initial model params.\"\"\"\n", " weights_key, bias_key = jax.random.split(rng)\n", " weight = jax.random.normal(weights_key, ())\n", " bias = jax.random.normal(bias_key, ())\n", " return Params(weight, bias)\n", "\n", "\n", "def loss_fn(params: Params, xs: jnp.ndarray, ys: jnp.ndarray) -> jnp.ndarray:\n", " \"\"\"Computes the least squares error of the model's predictions on x against y.\"\"\"\n", " pred = params.weight * xs + params.bias\n", " return jnp.mean((pred - ys) ** 2)\n", "\n", "LEARNING_RATE = 0.005\n", "\n", "# So far, the code is identical to the single-device case. Here's what's new:\n", "\n", "\n", "# Remember that the `axis_name` is just an arbitrary string label used\n", "# to later tell `jax.lax.pmean` which axis to reduce over. Here, we call it\n", "# 'num_devices', but could have used anything, so long as `pmean` used the same.\n", "@functools.partial(jax.pmap, axis_name='num_devices')\n", "def update(params: Params, xs: jnp.ndarray, ys: jnp.ndarray) -> Tuple[Params, jnp.ndarray]:\n", " \"\"\"Performs one SGD update step on params using the given data.\"\"\"\n", "\n", " # Compute the gradients on the given minibatch (individually on each device).\n", " loss, grads = jax.value_and_grad(loss_fn)(params, xs, ys)\n", "\n", " # Combine the gradient across all devices (by taking their mean).\n", " grads = jax.lax.pmean(grads, axis_name='num_devices')\n", "\n", " # Also combine the loss. Unnecessary for the update, but useful for logging.\n", " loss = jax.lax.pmean(loss, axis_name='num_devices')\n", "\n", " # Each device performs its own update, but since we start with the same params\n", " # and synchronise gradients, the params stay in sync.\n", " new_params = jax.tree_multimap(\n", " lambda param, g: param - g * LEARNING_RATE, params, grads)\n", "\n", " return new_params, loss" ] }, { "cell_type": "markdown", "metadata": { "id": "RWce8YZ4Pcmf" }, "source": [ "Here's how `update()` works:\n", "\n", "Undecorated and without the `pmean`s, `update()` takes data tensors of shape `[batch, ...]`, computes the loss function on that batch and evaluates its gradients.\n", "\n", "We want to spread the `batch` dimension across all available devices. To do that, we add a new axis using `pmap`. The arguments to the decorated `update()` thus need to have shape `[num_devices, batch_per_device, ...]`. 
So, to call the new `update()`, we'll need to reshape data batches so that what used to be `batch` is reshaped to `[num_devices, batch_per_device]`. That's what `split()` does below. Additionally, we'll need to replicate our model parameters, adding the `num_devices` axis. This reshaping is how a pmapped function knows which data to send to which device.\n", "\n", "At some point during the update step, we need to combine the gradients computed by each device -- otherwise, the updates performed by each device would be different. That's why we use `jax.lax.pmean` to compute the mean across the `num_devices` axis, giving us the average gradient of the batch. That average gradient is what we use to compute the update.\n", "\n", "Aside on naming: here, we use `num_devices` for the `axis_name` for didactic clarity while introducing `jax.pmap`. However, in some sense that is tautologous: any axis introduced by a pmap will represent a number of devices. Therefore, it's common to see the axis be named something semantically meaningful, like `batch`, `data` (signifying data parallelism) or `model` (signifying model parallelism)." ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "id": "_CTtLrsQ-0kK" }, "outputs": [], "source": [ "# Generate true data from y = w*x + b + noise\n", "true_w, true_b = 2, -1\n", "xs = np.random.normal(size=(128, 1))\n", "noise = 0.5 * np.random.normal(size=(128, 1))\n", "ys = xs * true_w + true_b + noise\n", "\n", "# Initialise parameters and replicate across devices.\n", "params = init(jax.random.PRNGKey(123))\n", "n_devices = jax.local_device_count()\n", "replicated_params = jax.tree_map(lambda x: jnp.array([x] * n_devices), params)" ] }, { "cell_type": "markdown", "metadata": { "id": "dmCMyLP9SV99" }, "source": [ "So far, we've just constructed arrays with an additional leading dimension. The params are all still on the host (CPU). `pmap` will communicate them to the devices when `update()` is first called, and each copy will stay on its own device subsequently. You can tell because they are a DeviceArray, not a ShardedDeviceArray:" ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "id": "YSCgHguTSdGW", "outputId": "a8bf28df-3747-4d49-e340-b7696cf0c27d" }, "outputs": [ { "data": { "text/plain": [ "jax.interpreters.xla._DeviceArray" ] }, "execution_count": 19, "metadata": { "tags": [] }, "output_type": "execute_result" } ], "source": [ "type(replicated_params.weight)" ] }, { "cell_type": "markdown", "metadata": { "id": "90VtjPbeY-hD" }, "source": [ "The params will become a ShardedDeviceArray when they are returned by our pmapped `update()` (see further down)." ] }, { "cell_type": "markdown", "metadata": { "id": "eGVKxk1CV-m1" }, "source": [ "We do the same to the data:" ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "id": "vY61QJoFWCII", "outputId": "f436a15f-db97-44cc-df33-bbb4ff222987" }, "outputs": [ { "data": { "text/plain": [ "numpy.ndarray" ] }, "execution_count": 20, "metadata": { "tags": [] }, "output_type": "execute_result" } ], "source": [ "def split(arr):\n", "  \"\"\"Splits the first axis of `arr` evenly across the number of devices.\"\"\"\n", "  return arr.reshape(n_devices, arr.shape[0] // n_devices, *arr.shape[1:])\n", "\n", "# Reshape xs and ys for the pmapped `update()`.\n", "x_split = split(xs)\n", "y_split = split(ys)\n", "\n", "type(x_split)" ] }, { "cell_type": "markdown", "metadata": { "id": "RzfJ-oK5WERq" }, "source": [ "The data is just a reshaped vanilla NumPy array. 
Hence, it cannot be anywhere but on the host, as NumPy runs on CPU only. Since we never modify it, it will get sent to the device at each `update` call, like in a real pipeline where data is typically streamed from CPU to the device at each step." ] }, { "cell_type": "code", "execution_count": 22, "metadata": { "id": "atOTi7EeSQw-", "outputId": "c8daf141-63c4-481f-afa5-684c5f7b698d" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "after first `update()`, `replicated_params.weight` is a \n", "after first `update()`, `loss` is a \n", "after first `update()`, `x_split` is a \n", "Step 0, loss: 0.228\n", "Step 100, loss: 0.228\n", "Step 200, loss: 0.228\n", "Step 300, loss: 0.228\n", "Step 400, loss: 0.228\n", "Step 500, loss: 0.228\n", "Step 600, loss: 0.228\n", "Step 700, loss: 0.228\n", "Step 800, loss: 0.228\n", "Step 900, loss: 0.228\n" ] } ], "source": [ "def type_after_update(name, obj):\n", " print(f\"after first `update()`, `{name}` is a\", type(obj))\n", "\n", "# Actual training loop.\n", "for i in range(1000):\n", "\n", " # This is where the params and data gets communicated to devices:\n", " replicated_params, loss = update(replicated_params, x_split, y_split)\n", "\n", " # The returned `replicated_params` and `loss` are now both ShardedDeviceArrays,\n", " # indicating that they're on the devices.\n", " # `x_split`, of course, remains a NumPy array on the host.\n", " if i == 0:\n", " type_after_update('replicated_params.weight', replicated_params.weight)\n", " type_after_update('loss', loss)\n", " type_after_update('x_split', x_split)\n", "\n", " if i % 100 == 0:\n", " # Note that loss is actually an array of shape [num_devices], with identical\n", " # entries, because each device returns its copy of the loss.\n", " # So, we take the first element to print it.\n", " print(f\"Step {i:3d}, loss: {loss[0]:.3f}\")\n", "\n", "\n", "# Plot results.\n", "\n", "# Like the loss, the leaves of params have an extra leading dimension,\n", "# so we take the params from the first device.\n", "params = jax.device_get(jax.tree_map(lambda x: x[0], replicated_params))" ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "id": "rvVCACv9UZcF", "outputId": "5c472d0f-1236-401b-be55-86e3dc43875d" }, "outputs": [ { "data": { "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAXIAAAD4CAYAAADxeG0DAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjIsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+WH4yJAAAgAElEQVR4nO3de3iU1bn38e8iDpBQMCqBLUEI1i0KghwCUlArYAEVMSoWqQWF+uJhK8UqJ9sth20lioqtWq27oO6CiIqNArJBBEXYogQJKCdRRCEqIBIOEiSH9f4xmZnMZGYyM5lkMjO/z3V5XTxrnsMalTsr93OvtYy1FhERiV8NYt0BERGpGQVyEZE4p0AuIhLnFMhFROKcArmISJw7JRYPbd68uc3KyorFo0VE4taGDRu+t9Zm+LbHJJBnZWWRn58fi0eLiMQtY8xX/tqVWhERiXMK5CIicU6BXEQkzsUkR+5PSUkJe/fu5cSJE7HuikRR48aNad26NQ6HI9ZdEUlY9SaQ7927l6ZNm5KVlYUxJtbdkSiw1nLw4EH27t1Lu3btYt0dkYRVbwL5iRMnFMQTjDGGM844gwMHDsS6KyIxl7exkJnLdvBNUTGt0lMZP7A9OV0zo3LvehPIAQXxBKT/piLOID759U8oLikDoLComMmvfwIQlWCul50iIrVs5rId7iDuUlxSxsxlO6JyfwXySowx/Pa3v3Ufl5aWkpGRweDBg8O6T1ZWFt9//31E52RlZdGpUyc6d+7MgAED+O6778J6dmVTp07l0UcfBeCBBx5gxYoVAc8tKCjgrbfech+/+eab5ObmRvxsEfH4pqg4rPZwKZBX0qRJEz799FOKi53/ct9++20yM6OTwwrHqlWr2Lx5M9nZ2Tz00ENen1lrKS8vD/ue06dP5/LLLw/4uW8gHzJkCJMmTQr7OSJSVav01LDaw6VA7uPKK69kyZIlAMyfP5/hw4e7P/vhhx/Iycmhc+fO9OrVi82bNwNw8OBBBgwYQMeOHbn11lupvOvS3Llz6dmzJ126dOG2226jrMz716tgLr30Uj7//HN2795N+/btGTlyJBdccAF79uxh5syZ9OjRg86dOzNlyhT3NX/+858599xzufjii9mxw/Nr2y233MJrr70GwPr16+nduzcXXnghPXv25PDhwzzwwAMsWLCALl26sGDBAl544QXuuusuAHbv3k2/fv3o3Lkz/fv35+uvv3bfc+zYsfTu3Zuzzz7bfX8R8TZ+YHtSHSlebamOFMYPbB+V+9erl51u48ZBQUF079mlCzzxRLWn3XjjjUyfPp3BgwezefNmRo8ezfvvvw/AlClT6Nq1K3l5eaxcuZKRI0dSUFDAtGnTuPjii3nggQdYsmQJs2fPBmDbtm0sWLCAtWvX4nA4uPPOO5k3bx4jR44MqcuLFy+mU6dOAOzcuZMXX3yRXr16sXz5cnbu3MlHH32EtZYhQ4awevVqmjRpwssvv0xBQQGlpaV069aN7t27e93z5MmTDBs2jAULFtCjRw+OHDlCWloa06dPJz8/n6eeegqAF154wX3N3Xffzc0338zNN9/MnDlzGDt2LHl5eQB8++23rFmzhu3btzNkyBCGDh0a0ncTSSauF5pJUbVSH3Tu3Jndu3czf/58rrzySq/P1qxZw8KFCwHo168fBw8e5MiRI6xevZrXX38dgKuuuorTTjsNgHfeeYcNGzbQo0cPAIqLi2nRokW1fejbty8pKSl07tyZBx98kKKiItq2bUuvXr0AWL58OcuXL6dr164AHDt2jJ07d3L06FGuvfZa0tLSAGd6xNeOHTs488wz3X1q1qxZtf354IMP3N9vxIgRTJgwwf1ZTk4ODRo0oEOHDuzbt6/ae4kkq5yumVEL3L7qZyAPYeRcm4YMGcJ9993Hu+++y8GDByO+j7WWm2++mRkzZoR13apVq2jevLn7uKioiCZNmnjdd/Lkydx2221e1z0Rg39vjRo1cv9ZG3mLxIZy5H6MHj2aKVOmuNMaLpdccgnz5s0D4N1336V58+Y0a9aMSy+9lJdeegmApUuXcujQIQD69+/Pa6+9xv79+wFnjv2rr/yuQhmWgQMHMmfOHI4dOwZAYWEh+/fv59JLLyUvL4/i4mKOHj3KokWLqlzbvn17vv32W9avXw/A0aNHKS0tpWnTphw9etTv83r37s3LL78MwLx587jkkktq/B1EJHrq54g8xlq3bs3YsWOrtE+dOpXRo0fTuXNn0tLSePHFFwFn7nz48OF07NiR3r1706ZNGwA6dOjAgw8+yIABAygvL8fhcPD000/Ttm3bGvVvwIABbNu2jV/84hcA/OxnP2Pu3Ll069aNYcOGceGFF9KiRQt3+qSyhg0bsmDBAu6++26Ki4tJTU1lxYoV9O3bl9zcXLp06cLkyZO9rnnyyScZNWoUM2fOJCMjg+eff75G/ReR6DKx+HU4Ozvb+m4ssW3bNs4///w674vUPv23laRnLQwfDgsWwKZN0LlzRLcxxmyw1mb7tmtELiJSmxYtgsqFBxlVdmqrMQVyEZHa8P333kH7/POdZdUNG0b9UfXqZaeqHhKP/ptK0rEWbrnFO4hv3gxbt9ZKEId6FMgbN27MwYMH9Rc/gbjWI2/cuHGsuyJSN5YvhwYNoKIQgocecgZ2nwq4aKs3qZXWrVuzd+9erV2dYFw7BIkktEOH4PTTPcdt28L27VBHg5h6E8gdDod2kRGR+HPHHfDss57jDRugW7c67UK9Sa2IiMSVVavAGE8QnzLFmUap4yAO9WhELiISFw4fhhYt4ORJ53HLlrBrF1SscRQLGpGLiITqD3+A9HRPEF+3Dr77LqZBHBTIRUSqt3atM40ya5bzeMIEZxrlooti268KUUutGGNSgHyg0Fob3t5oIiLVqM1d6AM6dgxat3amUwCaNYO9e6Fp09p9bpiiOSL/PbAtivcTEQE8u9AXFhVj8exCn7exsPYeev/9zoDtCuLvv+/8cz0L4hClQG6MaQ1cBfwjGvcTEakskl3o8zYW0id3Je0mLaFP7srQg/5HHznTKK59BMaOdaZRLr440u7XumilVp4AJgABf1QZY8YAYwD3Mq8iIqEIdxd61wjeFfxdI3ggcDrm+HE45xz49lvnscMBBw7AqafWrPN1oMYjcmPMYGC/tXZDsPOstc9Za7OttdkZtbD6l4gkrnB3oQ97BD99OjRp4gni77zjrEyJgyAO0Umt9AGGGGN2Ay8D/Ywxc6NwXxERIPxd6EMewW/c6EyjTJniPB4zxplG6devxn2uSzVOrVhrJwOTAYwxlwH3WWt/W9P7ioi4hLsLfav0VAr9BHP3CP7ECejQAb780vPhwYPe66XEEc3sFJG4EM4u9OMHtvfKkUOlEfwjj8DEiZ6Tly6FQYOi3d06FdVAbq19F3g3mvcUEXEJtZbc3wh+WGoROd08K3F+fdX1tFn0qjO1Euc0IheRuBBuJYp7BH/yJEfO70SzXZ+5P+t29zyKTz2dGQXf1P6kojqgKfoiEhciqSXnL3
+BRo3cQfzW6/6TrImL+SHt1OqvjSMakYtIXAirlnzHDjjvPPfhW+37cOc1k6qkUQLdM95oRC4icSGkWvLSUsjO9grifPMNf75lut9ceKB7xhsFchGJC9XWkj/7rHM25oaKuYmvvOKsCT/zzLDr0OONUisiEhcC1pI3O+E92h48GN5806st3Dr0eGNisWt9dna2zc/Pr/Pnikj9k7exkGmLtnDoeAkA6akOpg7pWH2QLSuDyy6DNWs8bXv2OJedTVDGmA3W2mzfdqVWRCRm8jYWMv61Te4gDlBUXML4VzcFX63w+efhlFM8QXzuXGcaJYGDeDBKrYhIzMxctoOSsqpZgZJyy8xlO6qOyr/6CrKyPMeXXw7LlkED/2PSmGxGEQMK5CISM8HK/7w+Ky+HgQNhxQpP25dfegd1HxEtZRunlFoRkZgJVv7n/uyllyAlxRPE//EPZxolSBCHCCcQxSmNyEUkZsYPbM/41zZVSa84Ghj+s2sz72qU3r1h9WpnUA9BuJtRxDONyEUkZnK6ZjJz6IWcluZwt6U3PoXVa59g0KAenhN37nTuZB9iEIfwN6OIZxqRi0hMeS1Pu3AhDB3q+fBvf4M77ojovkGXsk0wCuQiEnVhV4t89x2ceabnuFs3WLfOOVMzQok+CagyBXIRqRHfoN33vAwWbigMrVrEWrjxRud0epetW+H884M+I9SAHM5mFPFMOXIRiZirxK+wqBiLM2jPW/d1aNUiixY5679dQfzxx52B3U8Q933G5Nc/CT5hKMloRC4iEfNX4hdo0Q93tciBA9CiheeD88+HggJo2DDkZ7h+MCTDaDsUGpGLSMTCKeVrdWpjuOUW7yC+ebMzlRIgiAd7RiKWEUZKgVxEIhaolM935e/+Xxew9v7L4cUXnQ0PPeRMo3TqFPEzErGMMFIK5CISsUDrfN/Uqw2Z6amkFx9l98ODmT3/T84Ps7KguBgmT67xMxKxjDBSypGLSMSClvjdfjv8/e+ekzdscJYVRvMZAmg9cpF6LS5X71u1Cvr18xxPmQJTp8asO4kk0HrkGpGL1FNxt3rf4cPOF5knTzqPW7aEXbsgLS22/UoCypGL1FNxtXrfuHGQnu4J4uvWOWdrKojXCQVykXoqLsru1qxxrlD4l784jydOdFajXHRRbPuVZJRaEamnWqWnUugnaNeLsrtjx5zbqh0+7Dxu1gz27oWmTWPbryRV4xG5MeYsY8wqY8xWY8wWY8zvo9ExkWRXb8vu7r/fGbBdQfz9951/VhCPmWiMyEuBe621HxtjmgIbjDFvW2u3RuHeIkmr3pXdffSRd8pk7FhPSkViqsaB3Fr7LfBtxZ+PGmO2AZmAArlIDdWL1fuOH4ezz4Z9+5zHDodzvZRTT43odnFZUlnPRfVlpzEmC+gKfBjN+4pIjEybBk2aeIL4O+84K1NqEMS1kmH0RS2QG2N+BiwExllrj/j5fIwxJt8Yk3/gwIFoPVZEasPGjc5qFNdEnjFjnNUolSf6RCCuSirjSFSqVowxDpxBfJ619nV/51hrnwOeA+fMzmg8V0Si7MQJ6NABvvzS03bwIJx+elRuHxcllXEoGlUrBpgNbLPWPl7zLolITDz8MKSmeoL40qXOUXiUgjhoJcPaEo3USh9gBNDPGFNQ8c+VUbiviNSFTz5xplEmTXIejxwJ5eUwaFDUH1VvSyrjXDSqVtZQdflhEanvTp6Erl2dGzu47N8PGRm19sh6V1KZIDSzUyQZPfEE3HOP5/iNN2DIkDp5dL0oqUwwCuQicSjiWuzt2703Nx461Ln5sdEv1fFM65GLxBnf5W3Bmdu0QGagoF5SAr16wccfu5uWLv+YBzccUoojjgRaj1yrH4rEmWA71xcWFTNuQQFdpy/3TLJ59lnn5sauIP7KK+R9vJc/rN6niTkJQoFcpB7I21hIn9yVtJu0hD65K4MGVH8rIvo6dLyEZ/6xzJkyueMOZ+Pgwc5qlBtu0MScBKMcuUiMhbsTUIoxlAVJiTYoL+OVlyaRXbjN07hnj3PZ2QqamJNYNCIXibFwR8fBgvgNm99m18xr3EF83OB7nZN6KgVx0MScRKMRuUgURVJNEu7o+LQ0B4eOl3i1ZR7ez9pnR7uP32/bhZHDptPqtCZ+7zF+YPsqL0w1MSd+KZCLREmkmyWHuxNQ5QG5seX8c8F/cvFXm9xtF98+m72ntgwamDUxJ7EokItESbAUSbAA2fe8DOau+9pvuz+Hi52j8SFb3+Wvix51t4+/Yiwrel1F0fGSwGWIlWhiTuJQIBeJkkhfIK7a7n9Z50DtF5pj5OXe6D5en9mBYb+ZQXmDFFJLypk1rIsCdJJRIBeJkkg3Sw72A8Ar535qYxYuzSXv/RXuc3455jm+Oq2V+ziU3wAk8SiQi0RJ3/MymLfuayrXlATLU7uCdKAalFNTHe6c+xXb1/DMG7nuzwomP0ROeWe/16mEMPkokItEQd7GQhZuKPQKyga4vrv/PLS/afa+iopLyDh2iG1Pj3C3fdLy5/zH3X9j9R8HkJm7MqLfACTxKJCLREGgafPzP9zDvHVfV6kK8Xe+98WWp954mME71rib+t/6DF+ccRbmqPNlp0oIxUWBXCQKAqUzXJN3fEsRg6U/+n/+IbMX/pf7eHq//8ecHte4j10jbpUQiosCuUgUBHrRWVnlF5H+zj/9+GE+fvIm9/HOM87iylF/pSTF4W7zHXGrhFBAU/RFosLfFmb+uEbiXudby2OLH/MK4gNHP8Wvbn3GK4gDNHbor6xUpRG5SBT4pjkaBFjYyjct8t6Tc5n1/CT354/3vYW/9hwa8DmHjpeENFtUkosCuUiUVE5z+KtK8UqL/PADOd1ak+P6MCsLtm3j7G0HyVy2g8Ki4oCrHKpWXHwpkIuEKZSFsYK+iLz9dvj73z0nb9gA3bq5r6t8r3aTlvitM1etuFSmQC5SIZQAHc7CWFVeRK5cCabScrJTpsDUqUH7FOlsUUkuCuQihB6gAy2MNW5BAfe+sokya6suWHX4MGRkOPfNBGjZEnbtgrS0avulWnEJhV6BixD65g7BUhq+NeN5Gwth3DhIT/cE8XXr4LvvQgri4PwhMuO6TmSmp2Jwbq4847pOyo+LF43IRQh95cJQ6sUBOn65mZxuV7iPPxv1H4w69zq++df3tFq1MqyJO6oVl+ookIsQei56/MD2jH9tEyVl/pe6SjtZzLqnb6bZyeMAHG2Uxqzn32H+1iKKK+5f3YYTkewyJMlNqRUR/E/oCZiLDrBc4cR3X2DrrBvcQXzoTQ/TadwrvLj5h5D35HTl6guLirH4pGlEAtCIXITQ1y2ZuWwHJeXekbzLNzvI++e97uPnu1/NtMtvcx8H2izZXzon0l2GJLlFJZAbYwYBfwFSgH9Ya3OruUSk3nHlol2pjXsWFDBz2Q53QM/bWOiVfmlccoL3/34rGT8WAfBTyilk3z2Po428NzwONLHHXwlhpLsMSXKrcSA3xqQATwO/AvYC640xb1prt9b03iJ1KW9jIdMWbfHaod6V2sj/6gcWbvCkN8atm
ce4tfPdx3ff+ij977yR0tc/AZ9Sweu7Z7JwQ2FIJYSqG5dIRCNH3hP43Fq7y1p7EngZuKaaa0TqFVduunIQdykuKWP+h3soLimj474v2P3wYHcQf+nCQZz/p6X0v/PGgKWCD+Z0CrmEMKxcvUiFaKRWMoE9lY73Ahf5nmSMGQOMAWjTpk0UHisSHXkbC92TeQI5peQn3v3HHZx1eJ+77cKx8zmc2pQnKgXlQKWCoZYQao1xiUSdvey01j4HPAeQnZ0d+G+MSB36U94nVfbZ9HXHuleZ+N6L7uObb5jGe2d3B5yj63CDbHXlhaobl3BFI5AXAmdVOm5d0SZSr+VtLAwaxNsf2M2yOXd5zu/Un3FXjANjAOeenH3Pywj7maGu1SISqmgE8vXAvxtj2uEM4DcCv4nCfUVqdXJMoB3sHWUlLJ1zN+f8sNfTuH8/+Wu/w1QK/BZYuKGQ7Lanh9wnlRdKbajxy05rbSlwF7AM2Aa8Yq3dUtP7itT25Bh/JX2j17/Bzkev9QTxN94AayEjg1XbD1QJ/IEm9oTzzGDtIqGISo7cWvsW8FY07iXiUtuj18qlftl7t/DavInuzwp/NZjMZW+60yhQfRAO5bcHlRdKbdDMTqm3ggXOQEEznFRM3/MyeHXNF+x47Fqv9tw5K5k0qi/gHZyDbd8Wau471GVptd6KhMPYICVXtSU7O9vm5+fX+XMlvvTJXRlwpUGD95InwSbe+KvZzttYSNmo0Vy/abm7bVaf33Dwvkk8mNPJfY5v0PXluv/Miu3ZfGWmp7J2Ur8qzw4WpANtE6fla8UYs8Fam+3brhG51Fv+Rq8u/nLV8z/cU2XE7DcVs349OT17ep3XbsKbWNOAzO0H3G3+UjvgnHJfbq1XEL5nQYHf7+Dvt4rqygv1QlTCpUAu9VblyTGhrAFe7eJUJSXQsKHXZ5f/7m983rxN1XMJnNopt5Yvc6/yaotm7lsvRCVcWsZW6rWcrpmsndQPU/2ppBj/Z7VKT4WxY72C+D9/eSNZExd7BXH3uX7+HOgcl2hOrQ/nuSKgQC5xoroglupIYfhFZ1UJph327WLt5P7w5JOextJSms56tNrA6y84Axw/WVqlBDKaW7JpvRUJl1IrEhf85ctdLzwrb3ac3fZ0pr65hSM/nmDXTO+124bd9jeG3zaEnJSUkNY0cf156ptbKCr2LKZ16HiJ34qUaE2t13orEi5VrUidi7S0LtTr/nnZcEa897L7uPJGD5FUfwSqnvFXkSJSm1S1IvVCpGuNhBTEt26Fjh0ZUanpnPvyKE3x/G8eSfWHXj5KfadALnUqktK6aoN/eTmkeOeUrxnxGJta+c8phxuANRtT6ju97JQ6FcnodtqiLYE3L54+3TuIjxpF3sd7+axth4D3CzcAB3r52Pe8DPrkrqTdpCX0yV2pDZIlZjQilzoV7ug2b2Oh31172h76hvceHuPdeOIENGpETsWh77Zt4HxBWlhUTJ/clSHn5v29fOx7XobXLFItRyuxpJedUqfCnX7eZdpyr4oRrGX3I1d7nbN69utcOvpa/HHl1guLiv1O64+0RFAvQCUW9LJT6oVwS+sqB/ExHy7k/nefdx8vPu8S7rpmIo4vDE2mLedwcUmV+7lKAv0FXn+5+VArY/QCVOoTBXKpc8HqrSsH0lNTHQC0PryPNc/+zuu89n9YyE+ORgCUlFl3wA+U4ggl8IZTUaMXoFKfKJBLVERSG+57jW/euej4ST594tf87KQnYP5m2IP8X1aXoPf1N9IOJfCGU1ET6nK0InVBVStSY5Hs5OPvmnnrvnYHxpEbFrH7kavdQfydn/cga+LiaoO4i+8IPJRp7+GkS6I5JV+kpjQilxqLpDbc3zUWaHn0ez782y1e7R3ueZXjDcNLWfimOELJzYebLtFu91JfKJBLjQUaybrK/PwFTn/XfPj0SFoe+8F9PGroFFb9vIfXOZkBgm1lgVIc1QVepUskXim1IjUW7AVfoHRL5Wt+vWk5ux8e7A7i6866gKyJi6sEcVdQzQzyvJqkOJQukXilEbnUWLCdfCqrnG4ZP7A9j/7PatY88RuvczqNW8DRRk2qXJvpM6Kvra3QlC6ReKRALhHxrTi5vnum363WfLlSKjk39iPns8/c7bfl3M+y9r39XuM7yUbLvIp4UyCXsPmrt164obDaIA5w8+7/AzPYfbzlzH8nZ9QTlJQFvjaUSTb5X/2gwC5JS4FcwhaoSiXFmIDBPL34CAV/9U6jdBn7EkWpzXBYOC3N4XdNFaiag/f3g2Tuuq/dn2vdE0k2etkpfuVtLAy4sl+gEXKZtTgaVN03880Xx3kF8bFX30fWxMUUpTYDoKTcktbwFJ4Y1iWkLc4C7W5fmXt1RJEkoBG5VFHdVPVA9dbpqQ5+PFnqPh644//4e95DnhPOPZd21z6OvzH7N0XFIee+Q13PROueSLJQIE9y/qbWVzfBJ9D+mSVl5ZSUWZr+9COfPDHM+0HffQctW9IqwKqBrvRJKFUjgX6QBLqnSKKrUWrFGDPTGLPdGLPZGPMvY0x6tDomtS/Q1PpAQdJdcdI1k+u7Z1I5iWKBH0+W8fJLk7yC+PgrxtJu4mJo2dJ5HIUd4gPtbl+Te4rEs5rmyN8GLrDWdgY+AybXvEtSV4K9tPSn8gh31fYDXimSy75Yz+6HB9Nrz6cAfPez08mauJhXOw+ggTHuHHs0Jt34u8dve7XRRB5JWlHbWMIYcy0w1Fp7U3XnamOJ+qHdpCV+89XgHNFWDvKOFEOThqe41/x2jdrTThazddYNXtdedOcL7GvavMr9FFxFaibQxhLRrFoZDSyN4v2klgXKIbtGtK4R7mlpDrDOTR5cKRgDzH5tmlcQf+Dy2+gydRnfN8uock9VkYjUnmoDuTFmhTHmUz//XFPpnD8CpcC8IPcZY4zJN8bkHzhwIDq9lxoJlq/O6ZrJ2kn9+DL3KtIankJJuWfs3md3AV8+PJj+X6wH4EjDNLImLOLVXjlMHdKR8gC/5amKRKR2VFu1Yq29PNjnxphbgMFAfxskT2OtfQ54DpyplfC6KbUh3HK/RiU/sePx670+u27CS2w0zTgtzYG1cM+CgoDPc+34IyLRVdOqlUHABGCItfZ4dLokdSXUXX1apafy1BsPewXxhy4bRZ8Z7/D6w8OZNawLJ0rK3amXQD+lA7xDFZEaqmkd+VNAI+Bt4/xbus5ae3uNeyW1LuT9Kd9/n7WT+7sPyzGcPeFNUhuewoyK8r5QZloCAafgi0jN1CiQW2vPiVZHpG5Vu6vPTz9B48Zen//63v9h/Smnu5eUBfzuTh9IoLJGEakZrbWSpILuT/m733kH8WnTwFpeeXQEs4Y598wct6CAexYUhBzEgZBWRxSR8GmKfgIJlvP2/Szdz2qDXb7ZQd4/7/W+aVkZNGjgvkfldEy4YTnYzj4iEjkF8gQRLOcNVPnM0cDgSDGUlFlOKSvl80dzvG+4ZQt06ODVFGou3B8DmjIvUksUyBNEsJz3jz+VVvmspNySnurgT8ueYegHeZ4PJk6E3Fy/zwil
Djw91cFPpeVVFtS6qVcbzeoUqSUK5Aki2E72/nTYt4u3Xhjr3VhaCimBF6M6NdVBUXHgypNURwpTh3QEtA2bSF1SII8T1dV8+8t5AzQwUGlSJg3Ky9g18xrvkzZuhC5dqn12sCDuuzmyArdI3VEgjwOh1HwHKgipHMQnvfs8t3+40H38xY2j+Pn8OWE92x8DXpsji0jdUiCPA9XWfAOHg4yWz/n+a1bMvtOrrccDS1g/7UqvtlA3mfDlb09NpVZE6o4CeRwIWvNdwd+uOcaW8+UjQ7zarhnxGJ+17cCMnE5e7YFG/dUFcd8NHEKeMSoiUaMJQXEg0HKz6WmeRah8VzL8/ZqXvIL4ouxBtJu4mO87dPG7Lni4m0yAMy9+ffdMZi7b4d6kedqiLQF/exCR2qEReRwYP7A941/bREmZdyL82IlS8jYWeu1zOX/eOyx47Gav86HSagMAAAwwSURBVNrf+y+aN2/GrCApjkCj/jJrq2wy4dokAqrWpweiJWxFao8CeZhikf/N6ZrJ1De3VKkaKSm3njy5teR0a03laT03jZzJ2jPPB6pOEPL9DoE2NM6slCv3/c59cleGPEFIGyGL1B4F8jDEMv8b6GXmN0XFMHMmTJjgblvR+TJuveK+KucWl5QxbdEWTpSUV/kO13fPZOGGwiojb1fQ9vf9Qh1layNkkdqlQB6GUKpHaou/EXPrw/tY8+zvvNq6TMyjKMh/Vn+15sUlZazafoAZ13UK67eNQKP49FQHTRqdoqoVkTqiQB6GUKpHosU3hdP3vAzPiNlats4aSlrJT54LVqygz3pDUYR9+aaoOODIO5DxA9tXqWxxze5U4BapO6paCUOgPG+087+uFE5hUbF7s+OFGwq5vnsmv9/6v+x+5GpPEL/6audsoP79q/2BkupIIT3AdmuRfIecrplemzS7Nm1WEBepWxqRhyHQCDTa+V9/KZxmP+zjwWuv8Gr7xeQ8TjROpWjSElqlpwZdC6XyZhDR/A7hjuJFJPoUyMMQ6mbFoQpUAeM7sv7oqRG0+PGQ+/iWoVN49+c9oByoyHkXFhXjSDE4GhivHe9dpYL+6saVwxZJDCbIxve1Jjs72+bn59f5c+sTf2uYuILuzGU7KCwq5teblvPI//7V/fm6Np24cfiMoPdNczTgtCaNFKRFEpAxZoO1Ntu3XSPyOlR5BN7AmCpbn7kqYP6UfTpX/Kqr12c97nuNAynee2j6c7yknIcUvEWSigJ5lAVKl/iOwAPtXzn3kRG0O/SN+/j2nPv55KL+/HFge+59ZVNI+17WRTmkiNQfCuRRFGzCUHWrCF6zZRV/WfyYpyE7G9av59lK54xbUBBSPzQdXiS5KJBHUbAJQ4GCa3rxEQr++hvvxoMH4fTTvZryNhZiCG3DY02HF0kuqiOPomAThvwF10Uv/N4riOf/+UlnTbhPEAfnD4lQgrimw4skHwXyKAo2YajyMrODdqxl98OD6bTvC+cJ7duDtWTff1fAewdLl2hCjkhyU2olioJNGMrpmonj6GGu+mVH74v27YMWLaq9d7DVCbXNmkhyUyCvId8qleu7Z7Jq+4Gqddy//CVXrV7tuXD2bPK6DmTmnE9Dqvmuq1mlIhJ/FMgj4ArehUXFXi8gXWuieKU3liyBbq3d1xZntOTyP7xE4WfF8JmnCqWwqJjxr24C/C+JG+1ZpSKSOKIys9MYcy/wKJBhrf2+uvNjNbMzkk0hgq5CGEBmeipr7+oJTZt6tf/vsnzuef9A0GvTUx0UTBkQ3hcTkaQQaGZnjV92GmPOAgYAX9f0XrXJ34qCk1//hLyNhUGvGf/qJq9r5q77utpdcR787wneQfypp8Ba/uvjw9VeG2jRKxGRQKKRWpkFTADeiMK9ak0om0L4jr5/+PEnrwWoqtNndwHzFvzJ09CsGRQVQcUGxpqoIyK1oUaB3BhzDVBord1kguy2XnHuGGAMQJs2bWry2IhUtymEv1mZoWpccoLtjw/1bty9G9q29WoKVHlS2Wlp/tcLFxEJpNrUijFmhTHmUz//XAPcDzwQyoOstc9Za7OttdkZGRk17XfYqtsUorop9IE8lZfrFcQ/Hfcn56QenyAOeNWS++NIMUy5umPAz0VE/Kl2RG6tvdxfuzGmE9AOcI3GWwMfG2N6Wmu/i2ovo6C68r1w0x499nzKqy9N8jQYA2VlXBDkNxPfypNTUx0YA0XHS1SFIiIRizi1Yq39BHDPZDHG7AayQ6laiYXqyvdCSXsAnFJWyo7HriPFlnsad+6Ec84JuR+RBOtIKm5EJDkkVR15sCAaaMR+ffdMlmz+lkPHS/jtx0t48O1n3J9vu/1ezn/m0Vrvd7BVFRXMRSRqgdxamxWte8VCsBH7gx0bw7nnus9dc/4v+H7uK+RUmuhTm0KpuBGR5JVUI/LqVBmxl5ZC797wwQeetj17uLh13QRwl+oqbkQkuSXd6od5Gwvpk7uSdpOW0Cd3ZeAJQbNng8PhCeJz5zqrUeo4iEP1FTciktySKpCHNLvzyy+dFSi33uo8/tWvoKwMbropJn0G/2WLWjBLRFySKpAHyzVTXg79+8PZZ3s+3L0bli+HBrH915TTNZMZ13XSuuMi4ldS5cgD5ZR7rn0LUvp7GubMgVGj6qhXoYm0bFFEEl9SBXLfWvEzjxzgg2cqBew+feC99yAl8OxLEZH6JqlSK65cs7HlzHl1qncQ//xzWLNGQVxE4k5SBfKcrpm82PQrvnxkCP12OddDL7h/hrMa5ec/j3HvREQikzyplW+/hVat6Ok67t4dPviALg6tNigi8S3xR+TWwtCh0KqVp23bNsjPd9aJi4jEucQO5G+84SwdXLjQeTxrljOwn3debPslIhJFiZlaOXAAWrTwHHfoABs3QsOGseuTiEgtSawRubUwYoR3EP/kE9iyRUFcRBJW4gTypUudaZS5c53HubnOwH7BBbHtl4hILYv/1MrBg9C8uee4XTvYuhUaN45dn0RE6lB8j8jHjPEO4h9/DLt2KYiLSFKJz0C+cqVzhcL//m/n8dSpzjRK164x7ZaISCzEV2rlyBE44wznhg8A//Zv8MUXkJYW236JiMRQfI3IL7zQE8TXrXPO1lQQF5EkF18j8nfegQ8/hOHDY90TEZF6I74C+dlne2/8ICIi8RPI8zYW+t3hXkQk2cVFIHfttenaps211yagYC4iSS8uXnYG3WtTRCTJxUUgD7TXZqB2EZFkEheBvFV6aljtIiLJJC4CuWuvzcpSHSmMH9g+Rj0SEak/ahzIjTF3G2O2G2O2GGMeiUanfOV0zWTGdZ3ITE/FAJnpqcy4rpNedIqIUMOqFWNMX+Aa4EJr7U/GmBbVXROpnK6ZCtwiIn7UdER+B5Brrf0JwFq7v+ZdEhGRcNQ0kJ8LXGKM+dAY854xpkegE40xY4wx+caY/AMHDtTwsSIi4lJtasUYswL4Nz8f/bHi+tOBXkAP4BVjzNnWWut7srX2OeA5gOzs7Cqfi4hIZKoN5NbaywN9Zoy5A3i9InB/ZIwpB5oDGnKLiNSRmqZW8oC
+AMaYc4GGwPc17ZSIiITO+MmChH6xMQ2BOUAX4CRwn7V2ZQjXHQC+ivjB4WlOcv1wSbbvC8n3nfV9E1+g79zWWpvh21ijQB4PjDH51trsWPejriTb94Xk+876vokv3O8cFzM7RUQkMAVyEZE4lwyB/LlYd6COJdv3heT7zvq+iS+s75zwOXIRkUSXDCNyEZGEpkAuIhLnEj6QG2NmViyzu9kY8y9jTHqs+1TbjDE3VCwrXG6MSdiyLWPMIGPMDmPM58aYSbHuT20zxswxxuw3xnwa677UBWPMWcaYVcaYrRX/P/8+1n2qTcaYxsaYj4wxmyq+77RQr034QA68DVxgre0MfAZMjnF/6sKnwHXA6lh3pLYYY1KAp4ErgA7AcGNMh9j2qta9AAyKdSfqUClwr7W2A871nP4jwf8b/wT0s9ZeiHOS5SBjTK9QLkz4QG6tXW6tLa04XAe0jmV/6oK1dpu1NtF3pu4JfG6t3WWtPQm8jHNt/IRlrV0N/BDrftQVa+231tqPK/58FNgGJOymBNbpWMWho+KfkKpREj6Q+xgNLI11JyQqMoE9lY73ksB/yZOdMSYL6Ap8GNue1C5jTIoxpgDYD7xtrQ3p+9Zoh6D6IthSu9baNyrO+SPOX9Xm1WXfakso31kkERhjfgYsBMZZa4/Euj+1yVpbBnSpeJf3L2PMBdbaat+JJEQgD7bULoAx5hZgMNDf31rp8ai675wECoGzKh23rmiTBGKMceAM4vOsta/Huj91xVpbZIxZhfOdSLWBPOFTK8aYQcAEYIi19nis+yNRsx74d2NMu4pVOG8E3oxxnySKjDEGmA1ss9Y+Huv+1DZjTIarqs4Ykwr8CtgeyrUJH8iBp4CmwNvGmAJjzLOx7lBtM8Zca4zZC/wCWGKMWRbrPkVbxQvsu4BlOF+CvWKt3RLbXtUuY8x84AOgvTFmrzHmd7HuUy3rA4wA+lX83S0wxlwZ607VojOBVcaYzTgHKm9baxeHcqGm6IuIxLlkGJGLiCQ0BXIRkTinQC4iEucUyEVE4pwCuYhInFMgFxGJcwrkIiJx7v8DFIKP9D3NNnoAAAAASUVORK5CYII=\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light", "tags": [] }, "output_type": "display_data" } ], "source": [ "import matplotlib.pyplot as plt\n", "plt.scatter(xs, ys)\n", "plt.plot(xs, params.weight * xs + params.bias, c='red', label='Model Prediction')\n", "plt.legend()\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": { "id": "4wFJcqbhbn81" }, "source": [ "## Aside: hosts and devices in JAX\n", "\n", "When running on TPU, the idea of a 'host' becomes important. A host is the CPU that manages several devices. A single host can only manage so many devices (usually 8), so when running very large parallel programs, multiple hosts are needed, and some finesse is required to manage them." ] }, { "cell_type": "code", "execution_count": 24, "metadata": { "id": "3DO8NwW5hurX", "outputId": "6df0bdd7-fee2-4805-9bfe-38e41bdaeb50" }, "outputs": [ { "data": { "text/plain": [ "[TpuDevice(id=0, host_id=0, coords=(0,0,0), core_on_chip=0),\n", " TpuDevice(id=1, host_id=0, coords=(0,0,0), core_on_chip=1),\n", " TpuDevice(id=2, host_id=0, coords=(1,0,0), core_on_chip=0),\n", " TpuDevice(id=3, host_id=0, coords=(1,0,0), core_on_chip=1),\n", " TpuDevice(id=4, host_id=0, coords=(0,1,0), core_on_chip=0),\n", " TpuDevice(id=5, host_id=0, coords=(0,1,0), core_on_chip=1),\n", " TpuDevice(id=6, host_id=0, coords=(1,1,0), core_on_chip=0),\n", " TpuDevice(id=7, host_id=0, coords=(1,1,0), core_on_chip=1)]" ] }, "execution_count": 24, "metadata": { "tags": [] }, "output_type": "execute_result" } ], "source": [ "jax.devices()" ] }, { "cell_type": "markdown", "metadata": { "id": "sJwayfCoy15a" }, "source": [ "When running on CPU you can always emulate an arbitrary number of devices with a nifty `--xla_force_host_platform_device_count` XLA flag, e.g. by executing the following before importing JAX:\n", "```python\n", "import os\n", "os.environ['XLA_FLAGS'] = '--xla_force_host_platform_device_count=8'\n", "jax.devices()\n", "```\n", "```\n", "[CpuDevice(id=0),\n", " CpuDevice(id=1),\n", " CpuDevice(id=2),\n", " CpuDevice(id=3),\n", " CpuDevice(id=4),\n", " CpuDevice(id=5),\n", " CpuDevice(id=6),\n", " CpuDevice(id=7)]\n", "```\n", "This is especially useful for debugging and testing locally or even for prototyping in Colab since a CPU runtime is faster to (re-)start." ] } ], "metadata": { "accelerator": "TPU", "colab": { "name": "JAX Parallelism", "provenance": [] }, "jupytext": { "formats": "ipynb,md:myst" }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }