Commit 0d61eb5

Fix mdash
1 parent 855cccc commit 0d61eb5

1 file changed: +12 -12 lines changed

docs/examples/parallel-computing-with-dask.ipynb

@@ -129,8 +129,8 @@
 "> on datasets that don’t fit into memory. Currently, Dask is an entirely optional feature\n",
 "> for xarray. However, the benefits of using Dask are sufficiently strong that Dask may\n",
 "> become a required dependency in a future version of xarray.\n",
-">\n",
-"> &mdash; <cite>https://docs.xarray.dev/en/stable/use\n",
+"\n",
+"&mdash; <cite>https://docs.xarray.dev/en/stable/use\n",
 "\n",
 "**Which Xarray features support Dask?**\n",
 "\n",
@@ -139,16 +139,16 @@
 "> Dask arrays. When you load data as a Dask array in an xarray data structure, almost\n",
 "> all xarray operations will keep it as a Dask array; when this is not possible, they\n",
 "> will raise an exception rather than unexpectedly loading data into memory.\n",
-">\n",
-"> &mdash; <cite>https://docs.xarray.dev/en/stable/user-guide/dask.html#using-dask-with-xarray</cite>\n",
+"\n",
+"&mdash; <cite>https://docs.xarray.dev/en/stable/user-guide/dask.html#using-dask-with-xarray</cite>\n",
 "\n",
 "**What is the default Dask behavior for distributing work on compute hardware?**\n",
 "\n",
 "> By default, dask uses its multi-threaded scheduler, which distributes work across\n",
 "> multiple cores and allows for processing some datasets that do not fit into memory.\n",
 "> For running across a cluster, [setup the distributed scheduler](https://docs.dask.org/en/latest/setup.html).\n",
-">\n",
-"> &mdash; <cite>https://docs.xarray.dev/en/stable/user-guide/dask.html#using-dask-with-xarray</cite>\n",
+"\n",
+"&mdash; <cite>https://docs.xarray.dev/en/stable/user-guide/dask.html#using-dask-with-xarray</cite>\n",
 "\n",
 "**How do I use Dask arrays in an `xarray.Dataset`?**\n",
 "\n",
@@ -161,8 +161,8 @@
 "> `open_mfdataset()` called without `chunks` argument will return dask arrays with\n",
 "> chunk sizes equal to the individual files. Re-chunking the dataset after creation\n",
 "> with `ds.chunk()` will lead to an ineffective use of memory and is not recommended.\n",
-">\n",
-"> &mdash; <cite>https://docs.xarray.dev/en/stable/user-guide/dask.html#reading-and-writing-data</cite>\n"
+"\n",
+"&mdash; <cite>https://docs.xarray.dev/en/stable/user-guide/dask.html#reading-and-writing-data</cite>\n"
 ]
 },
 {
@@ -196,8 +196,8 @@
 "> - Array data formats are often chunked as well. When loading or saving data,\n",
 "> if is useful to have Dask array chunks that are aligned with the chunking\n",
 "> of your storage, often an even multiple times larger in each direction\n",
-">\n",
-"> &mdash; <cite>https://docs.dask.org/en/latest/array-chunks.html</cite>\n"
+"\n",
+"&mdash; <cite>https://docs.dask.org/en/latest/array-chunks.html</cite>\n"
 ]
 },
 {
@@ -267,8 +267,8 @@
 ">\n",
 "> - **Single-machine scheduler**: This scheduler provides basic features on a local process or thread pool. This scheduler was made first and is the default. It is simple and cheap to use, although it can only be used on a single machine and does not scale\n",
 "> - **Distributed scheduler**: This scheduler is more sophisticated, offers more features, but also requires a bit more effort to set up. It can run locally or distributed across a cluster\n",
-">\n",
-"> &mdash; <cite>https://docs.dask.org/en/stable/scheduling.html</cite>\n"
+"\n",
+"&mdash; <cite>https://docs.dask.org/en/stable/scheduling.html</cite>\n"
 ]
 },
 {
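
Decoding the notebook JSON above, each affected cell's Markdown changes as sketched below (shown for the last hunk; the other hunks follow the same pattern, and the "..." elision is ours). The attribution line is moved out of the blockquote so it renders as its own paragraph rather than as the final line of the quote:

Before:

    > ... It can run locally or distributed across a cluster
    >
    > &mdash; <cite>https://docs.dask.org/en/stable/scheduling.html</cite>

After:

    > ... It can run locally or distributed across a cluster

    &mdash; <cite>https://docs.dask.org/en/stable/scheduling.html</cite>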

0 commit comments
