Commit 421358b

Merge pull request #201 from zzhengnan/fix-typos
Typo fixes and minor editorial changes
2 parents 121947a + 830b87c commit 421358b

7 files changed (+62, -64 lines)

01_dask.delayed.ipynb

Lines changed: 2 additions & 2 deletions
@@ -28,7 +28,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"As well see in the [distributed scheduler notebook](05_distributed.ipynb), Dask has several ways of executing code in parallel. We'll use the distributed scheduler by creating a `dask.distributed.Client`. For now, this will provide us with some nice diagnostics. We'll talk about schedulers in depth later."
+"As we'll see in the [distributed scheduler notebook](05_distributed.ipynb), Dask has several ways of executing code in parallel. We'll use the distributed scheduler by creating a `dask.distributed.Client`. For now, this will provide us with some nice diagnostics. We'll talk about schedulers in depth later."
 ]
 },
 {
@@ -100,7 +100,7 @@
 "\n",
 "Those two increment calls *could* be called in parallel, because they are totally independent of one-another.\n",
 "\n",
-"We'll transform the `inc` and `add` functions using the `dask.delayed` function. When we call the delayed version by passing the arguments, exactly as before, but the original function isn't actually called yet - which is why the cell execution finishes very quickly.\n",
+"We'll transform the `inc` and `add` functions using the `dask.delayed` function. When we call the delayed version by passing the arguments, exactly as before, the original function isn't actually called yet - which is why the cell execution finishes very quickly.\n",
 "Instead, a *delayed object* is made, which keeps track of the function to call and the arguments to pass to it.\n"
 ]
 },
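
For readers following the diff without the notebook open, here is a minimal self-contained sketch of the pattern this cell describes (the `inc`/`add` names match the tutorial; creating a `Client` assumes `dask.distributed` is installed):

```python
import dask
from dask.distributed import Client

client = Client()  # local cluster; provides the diagnostics mentioned above

def inc(x):
    return x + 1

def add(x, y):
    return x + y

# Wrapping with dask.delayed defers execution: calling the wrapped
# function returns a Delayed object immediately, without running inc/add.
a = dask.delayed(inc)(1)
b = dask.delayed(inc)(2)
total = dask.delayed(add)(a, b)

print(total)            # Delayed('add-...'): nothing computed yet
print(total.compute())  # triggers the actual work; prints 5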

01x_lazy.ipynb

Lines changed: 7 additions & 7 deletions
@@ -58,7 +58,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Dask allows you to construct a prescription for the calculation you want to carry out. That may sound strange, but a simple example will demonstrate that you can achieve this while programming with perfectly ordinary Python functions and for-loops. We saw this in Chapter 02."
+"Dask allows you to construct a prescription for the calculation you want to carry out. That may sound strange, but a simple example will demonstrate that you can achieve this while programming with perfectly ordinary Python functions and for-loops. We saw this in the previous notebook."
 ]
 },
 {
@@ -98,15 +98,15 @@
 "x = inc(15)\n",
 "y = inc(30)\n",
 "total = add(x, y)\n",
-"# incx, incy and total are all delayed objects. \n",
-"# They contain a prescription of how to execute"
+"# x, y and total are all delayed objects. \n",
+"# They contain a prescription of how to carry out the computation"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Calling a delayed function created a delayed object (`incx, incy, total`) - examine these interactively. Making these objects is somewhat equivalent to constructs like the `lambda` or function wrappers (see below). Each holds a simple dictionary describing the task graph, a full specification of how to carry out the computation.\n",
+"Calling a delayed function created a delayed object (`x, y, total`) which can be examined interactively. Making these objects is somewhat equivalent to constructs like the `lambda` or function wrappers (see below). Each holds a simple dictionary describing the task graph, a full specification of how to carry out the computation.\n",
 "\n",
 "We can visualize the chain of calculations that the object `total` corresponds to as follows; the circles are functions, rectangles are data/results."
 ]
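
A sketch of the inspection this cell invites, under the same `inc`/`add` definitions (rendering the graph requires the optional `graphviz` dependency):

```python
from dask import delayed

@delayed
def inc(x):
    return x + 1

@delayed
def add(x, y):
    return x + y

x = inc(15)
y = inc(30)
total = add(x, y)

# The task graph is a plain mapping from unique task keys to
# (function, arguments) tuples.
print(dict(total.__dask_graph__()))

# Circles are functions, rectangles are data/results.
total.visualize()
```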
@@ -388,7 +388,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"There are many tasks that take a while to complete, but don't actually require much of the CPU, for example anything that requires communication over a network, or input from a user. In typical sequential programming, execution would need to halt while the process completes, and then continue execution. That would be dreadful for a user experience (imagine the slow progress bar that locks up the application and cannot be canceled), and wasteful of time (the CPU could have been doing useful work in the meantime.\n",
+"There are many tasks that take a while to complete, but don't actually require much of the CPU, for example anything that requires communication over a network, or input from a user. In typical sequential programming, execution would need to halt while the process completes, and then continue execution. That would be dreadful for user experience (imagine the slow progress bar that locks up the application and cannot be canceled), and wasteful of time (the CPU could have been doing useful work in the meantime).\n",
 "\n",
 "For example, we can launch processes and get their output as follows:\n",
 "```python\n",
@@ -556,9 +556,9 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Any Dask object, such as `total`, above, has an attribute which describes the calculations necessary to produce that result. Indeed, this is exactly the graph that we have been talking about, which can be visualized. We see that it is a simple dictionary, the keys are unique task identifiers, and the values are the functions and inputs for calculation.\n",
+"Any Dask object, such as `total`, above, has an attribute which describes the calculations necessary to produce that result. Indeed, this is exactly the graph that we have been talking about, which can be visualized. We see that it is a simple dictionary, in which the keys are unique task identifiers, and the values are the functions and inputs for calculation.\n",
 "\n",
-"`delayed` is a handy mechanism for creating the Dask graph, but the adventerous may wish to play with the full fexibility afforded by building the graph dictionaries directly. Detailed information can be found [here](http://dask.pydata.org/en/latest/graphs.html)."
+"`delayed` is a handy mechanism for creating the Dask graph, but the adventurous may wish to play with the full fexibility afforded by building the graph dictionaries directly. Detailed information can be found [here](http://dask.pydata.org/en/latest/graphs.html)."
 ]
 },
 {
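
For the adventurous route this paragraph mentions, a minimal sketch of a hand-built graph dictionary, executed with one of Dask's `get` functions:

```python
from dask.threaded import get

def inc(x):
    return x + 1

def add(x, y):
    return x + y

# Keys are unique task identifiers; values are (function, inputs)
# tuples, where an input naming another key expresses a dependency.
dsk = {
    'x': (inc, 15),
    'y': (inc, 30),
    'total': (add, 'x', 'y'),
}

print(get(dsk, 'total'))  # 47
```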

02_bag.ipynb

Lines changed: 8 additions & 7 deletions
@@ -320,7 +320,9 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"js.filter(lambda record: record['name'] == 'Alice').pluck('transactions').take(3)"
+"(js.filter(lambda record: record['name'] == 'Alice')\n",
+" .pluck('transactions')\n",
+" .take(3))"
 ]
 },
 {
@@ -455,8 +457,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"is_even = lambda x: x % 2\n",
-"b.foldby(is_even, binop=max, combine=max).compute()"
+"b.foldby(lambda x: x % 2, binop=max, combine=max).compute()"
 ]
 },
 {
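
As context for the `foldby` change above, a small runnable sketch of its semantics (`binop` folds items within a group, `combine` merges partial results across partitions):

```python
import dask.bag as db

b = db.from_sequence(range(10), npartitions=2)

# Group by parity and keep the maximum of each group: binop folds
# items into a per-partition accumulator, combine merges the
# accumulators coming from different partitions.
print(b.foldby(lambda x: x % 2, binop=max, combine=max).compute())
# e.g. [(0, 8), (1, 9)]
```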
@@ -495,7 +496,7 @@
 "# This one is comparatively fast and produces the same result.\n",
 "from operator import add\n",
 "def incr(tot, _):\n",
-"    return tot+1\n",
+"    return tot + 1\n",
 "\n",
 "result = js.foldby(key='name', \n",
 "                   binop=incr, \n",
@@ -549,7 +550,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"For the same reasons that Pandas is often faster than pure Python, `dask.dataframe` can be faster than `dask.bag`. We will work more with DataFrames later, but from for the bag point of view, they are frequently the end-point of the \"messy\" part of data ingestion—once the data can be made into a data-frame, then complex split-apply-combine logic will become much more straight-forward and efficient.\n",
+"For the same reasons that Pandas is often faster than pure Python, `dask.dataframe` can be faster than `dask.bag`. We will work more with DataFrames later, but from the point of view of a Bag, it is frequently the end-point of the \"messy\" part of data ingestion—once the data can be made into a data-frame, then complex split-apply-combine logic will become much more straight-forward and efficient.\n",
 "\n",
 "You can transform a bag with a simple tuple or flat dictionary structure into a `dask.dataframe` with the `to_dataframe` method."
 ]
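
A minimal sketch of the `to_dataframe` conversion this cell describes, using made-up records in the flat-dictionary shape it requires:

```python
import dask.bag as db

b = db.from_sequence([
    {'name': 'Alice', 'amount': 100},
    {'name': 'Bob', 'amount': 200},
])

# Flat dictionaries map cleanly onto DataFrame columns.
df = b.to_dataframe()
print(df.compute())
```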
@@ -575,7 +576,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Using a Dask DataFrame, how long does it take to do our prior computation of numbers of people with the same name? It turns out that `dask.dataframe.groupby()` beats `dask.bag.groupby()` more than an order of magnitude; but it still cannot match `dask.bag.foldby()` for this case."
+"Using a Dask DataFrame, how long does it take to do our prior computation of numbers of people with the same name? It turns out that `dask.dataframe.groupby()` beats `dask.bag.groupby()` by more than an order of magnitude; but it still cannot match `dask.bag.foldby()` for this case."
 ]
 },
 {
@@ -608,7 +609,7 @@
 "outputs": [],
 "source": [
 "def denormalize(record):\n",
-"    # returns a list for every nested item, each transaction of each person\n",
+"    # returns a list for each person, one item per transaction\n",
 "    return [{'id': record['id'], \n",
 "             'name': record['name'], \n",
 "             'amount': transaction['amount'], \n",

03_array.ipynb

Lines changed: 23 additions & 22 deletions
@@ -115,7 +115,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Before using dask, lets consider the concept of blocked algorithms. We can compute the sum of a large number of elements by loading them chunk-by-chunk, and keeping a running total.\n",
+"Before using dask, let's consider the concept of blocked algorithms. We can compute the sum of a large number of elements by loading them chunk-by-chunk, and keeping a running total.\n",
 "\n",
 "Here we compute the sum of this large array on disk by \n",
 "\n",
@@ -133,8 +133,8 @@
 "source": [
 "# Compute sum of large array, one million numbers at a time\n",
 "sums = []\n",
-"for i in range(0, 1000000000, 1000000):\n",
-"    chunk = dset[i: i + 1000000]  # pull out numpy array\n",
+"for i in range(0, 1_000_000_000, 1_000_000):\n",
+"    chunk = dset[i: i + 1_000_000]  # pull out numpy array\n",
 "    sums.append(chunk.sum())\n",
 "\n",
 "total = sum(sums)\n",
@@ -152,7 +152,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Now that we've seen the simple example above try doing a slightly more complicated problem, compute the mean of the array, assuming for a moment that we don't happen to already know how many elements are in the data. You can do this by changing the code above with the following alterations:\n",
+"Now that we've seen the simple example above, try doing a slightly more complicated problem. Compute the mean of the array, assuming for a moment that we don't happen to already know how many elements are in the data. You can do this by changing the code above with the following alterations:\n",
 "\n",
 "1. Compute the sum of each block\n",
 "2. Compute the length of each block\n",
@@ -182,8 +182,8 @@
 "source": [
 "sums = []\n",
 "lengths = []\n",
-"for i in range(0, 1000000000, 1000000):\n",
-"    chunk = dset[i: i + 1000000]  # pull out numpy array\n",
+"for i in range(0, 1_000_000_000, 1_000_000):\n",
+"    chunk = dset[i: i + 1_000_000]  # pull out numpy array\n",
 "    sums.append(chunk.sum())\n",
 "    lengths.append(len(chunk))\n",
 "\n",
@@ -216,7 +216,7 @@
 "You can create a `dask.array` `Array` object with the `da.from_array` function. This function accepts\n",
 "\n",
 "1. `data`: Any object that supports NumPy slicing, like `dset`\n",
-"2. `chunks`: A chunk size to tell us how to block up our array, like `(1000000,)`"
+"2. `chunks`: A chunk size to tell us how to block up our array, like `(1_000_000,)`"
 ]
 },
 {
@@ -226,15 +226,15 @@
 "outputs": [],
 "source": [
 "import dask.array as da\n",
-"x = da.from_array(dset, chunks=(1000000,))\n",
+"x = da.from_array(dset, chunks=(1_000_000,))\n",
 "x"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"** Manipulate `dask.array` object as you would a numpy array**"
+"**Manipulate `dask.array` object as you would a numpy array**"
 ]
 },
 {
@@ -243,7 +243,7 @@
 "source": [
 "Now that we have an `Array` we perform standard numpy-style computations like arithmetic, mathematics, slicing, reductions, etc..\n",
 "\n",
-"The interface is familiar, but the actual work is different. dask_array.sum() does not do the same thing as numpy_array.sum()."
+"The interface is familiar, but the actual work is different. `dask_array.sum()` does not do the same thing as `numpy_array.sum()`."
 ]
 },
 {
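
To make the distinction concrete, a small sketch (synthetic in-memory data standing in for the on-disk `dset`):

```python
import numpy as np
import dask.array as da

x = da.from_array(np.arange(1_000_000), chunks=(100_000,))

result = x.sum()
print(result)            # a lazy dask array; no work has happened yet
print(result.compute())  # now the blocked sum actually runs
```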
@@ -353,7 +353,7 @@
 "1. Use multiple cores in parallel\n",
 "2. Chain operations on a single blocks before moving on to the next one\n",
 "\n",
-"Dask.array translates your array operations into a graph of inter-related tasks with data dependencies between them. Dask then executes this graph in parallel with multiple threads. We'll discuss more about this in the next section.\n",
+"`Dask.array` translates your array operations into a graph of inter-related tasks with data dependencies between them. Dask then executes this graph in parallel with multiple threads. We'll discuss more about this in the next section.\n",
 "\n"
 ]
 },
@@ -410,7 +410,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Performance comparision\n",
+"Performance comparison\n",
 "---------------------------\n",
 "\n",
 "The following experiment was performed on a heavy personal laptop. Your performance may vary. If you attempt the NumPy version then please ensure that you have more than 4GB of main memory."
@@ -544,13 +544,6 @@
 "dsets[0]"
 ]
 },
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": []
-},
 {
 "cell_type": "code",
 "execution_count": null,
@@ -597,7 +590,11 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"metadata": {},
+"metadata": {
+ "jupyter": {
+  "source_hidden": true
+ }
+},
 "outputs": [],
 "source": [
 "arrays = [da.from_array(dset, chunks=(500, 500)) for dset in dsets]\n",
@@ -628,7 +625,11 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"metadata": {},
+"metadata": {
+ "jupyter": {
+  "source_hidden": true
+ }
+},
 "outputs": [],
 "source": [
 "x = da.stack(arrays, axis=0)\n",
@@ -783,7 +784,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"The [Lennard-Jones](https://en.wikipedia.org/wiki/Lennard-Jones_potential) is used in partical simuluations in physics, chemistry and engineering. It is highly parallelizable.\n",
+"The [Lennard-Jones potential](https://en.wikipedia.org/wiki/Lennard-Jones_potential) is used in partical simuluations in physics, chemistry and engineering. It is highly parallelizable.\n",
 "\n",
 "First, we'll run and profile the Numpy version on 7,000 particles."
 ]
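
For reference, the formula behind the linked potential, as a hedged NumPy sketch (not the notebook's implementation; parameters and particle count are illustrative):

```python
import numpy as np

def lennard_jones(r, sigma=1.0, epsilon=1.0):
    # V(r) = 4*epsilon*((sigma/r)**12 - (sigma/r)**6)
    sr6 = (sigma / r) ** 6
    return 4 * epsilon * (sr6 ** 2 - sr6)

rng = np.random.default_rng(0)
points = rng.random((1000, 3))

# All pairwise distances; every pair is independent, which is what
# makes the computation highly parallelizable.
diff = points[:, None, :] - points[None, :, :]
r = np.sqrt((diff ** 2).sum(axis=-1))

mask = ~np.eye(len(points), dtype=bool)   # exclude self-interactions
energy = lennard_jones(r[mask]).sum() / 2  # each pair counted once
print(energy)
```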

04_dataframe.ipynb

Lines changed: 4 additions & 4 deletions
@@ -27,7 +27,7 @@
 "source": [
 "<img src=\"images/pandas_logo.png\" align=\"right\" width=\"28%\">\n",
 "\n",
-"The `dask.dataframe` module implements a blocked parallel `DataFrame` object that mimics a large subset of the Pandas `DataFrame`. One Dask `DataFrame` is comprised of many in-memory pandas `DataFrames` separated along the index. One operation on a Dask `DataFrame` triggers many pandas operations on the constituent pandas `DataFrame`s in a way that is mindful of potential parallelism and memory constraints.\n",
+"The `dask.dataframe` module implements a blocked parallel `DataFrame` object that mimics a large subset of the Pandas `DataFrame` API. One Dask `DataFrame` is comprised of many in-memory pandas `DataFrames` separated along the index. One operation on a Dask `DataFrame` triggers many pandas operations on the constituent pandas `DataFrame`s in a way that is mindful of potential parallelism and memory constraints.\n",
 "\n",
 "**Related Documentation**\n",
 "\n",
@@ -500,7 +500,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"But lets try by passing both to a single `compute` call."
+"But let's try by passing both to a single `compute` call."
 ]
 },
 {
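
A sketch of why one `compute` call helps (`a` and `b` are illustrative stand-ins for the notebook's two results):

```python
import pandas as pd
import dask
import dask.dataframe as dd

df = dd.from_pandas(pd.DataFrame({'x': range(100)}), npartitions=4)

a = df.x.sum()
b = df.x.mean()

# Computed separately, the shared portion of the task graph (reading
# and partitioning the data) is traversed twice; one dask.compute call
# merges the graphs so shared intermediates run only once.
total, mean = dask.compute(a, b)
print(total, mean)
```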
@@ -524,7 +524,7 @@
 "- the filter (`df[~df.Cancelled]`)\n",
 "- some of the necessary reductions (`sum`, `count`)\n",
 "\n",
-"To see what the merged task graphs between multiple results look like (and what's shared), you can use the `dask.visualize` function (we might want to use `filename='graph.pdf'` to zoom in on the graph better):"
+"To see what the merged task graphs between multiple results look like (and what's shared), you can use the `dask.visualize` function (we might want to use `filename='graph.pdf'` to save the graph to disk so that we can zoom in more easily):"
 ]
 },
 {
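
A self-contained sketch of that call (needs the optional `graphviz` dependency; writes the file to the working directory):

```python
import pandas as pd
import dask
import dask.dataframe as dd

df = dd.from_pandas(pd.DataFrame({'x': range(100)}), npartitions=4)
a, b = df.x.sum(), df.x.mean()

# One image for both results; tasks shared between a and b appear once.
dask.visualize(a, b, filename='graph.pdf')
```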
@@ -560,7 +560,7 @@
 "Dask.dataframe operations use `pandas` operations internally. Generally they run at about the same speed except in the following two cases:\n",
 "\n",
 "1. Dask introduces a bit of overhead, around 1ms per task. This is usually negligible.\n",
-"2. When Pandas releases the GIL (coming to `groupby` in the next version) `dask.dataframe` can call several pandas operations in parallel within a process, increasing speed somewhat proportional to the number of cores. For operations which don't release the GIL, multiple processes would be needed to get the same speedup."
+"2. When Pandas releases the GIL `dask.dataframe` can call several pandas operations in parallel within a process, increasing speed somewhat proportional to the number of cores. For operations which don't release the GIL, multiple processes would be needed to get the same speedup."
 ]
 },
 {
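
As a sketch of the second point (illustrative data; string methods are a classic example of pandas code that holds the GIL; in a plain script, guard the call with `if __name__ == "__main__":`):

```python
import pandas as pd
import dask.dataframe as dd

pdf = pd.DataFrame({'name': ['Alice', 'Bob'] * 10_000})
df = dd.from_pandas(pdf, npartitions=4)

# GIL-bound work gains little from threads; the process-based
# scheduler sidesteps the GIL at the cost of moving data between
# worker processes.
print(df.name.str.len().sum().compute(scheduler='processes'))
```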

05_distributed.ipynb

Lines changed: 11 additions & 11 deletions
@@ -21,32 +21,32 @@
 "As we have seen so far, Dask allows you to simply construct graphs of tasks with dependencies, as well as have graphs created automatically for you using functional, Numpy or Pandas syntax on data collections. None of this would be very useful, if there weren't also a way to execute these graphs, in a parallel and memory-aware way. So far we have been calling `thing.compute()` or `dask.compute(thing)` without worrying what this entails. Now we will discuss the options available for that execution, and in particular, the distributed scheduler, which comes with additional functionality.\n",
 "\n",
 "Dask comes with four available schedulers:\n",
-"- \"threaded\": a scheduler backed by a thread pool\n",
+"- \"threaded\" (aka \"threading\"): a scheduler backed by a thread pool\n",
 "- \"processes\": a scheduler backed by a process pool\n",
 "- \"single-threaded\" (aka \"sync\"): a synchronous scheduler, good for debugging\n",
 "- distributed: a distributed scheduler for executing graphs on multiple machines, see below.\n",
 "\n",
 "To select one of these for computation, you can specify at the time of asking for a result, e.g.,\n",
 "```python\n",
 "myvalue.compute(scheduler=\"single-threaded\")  # for debugging\n",
-"```"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"or set the current default, either temporarily or globally\n",
+"```\n",
+"\n",
+"You can also set a default scheduler either temporarily\n",
 "```python\n",
 "with dask.config.set(scheduler='processes'):\n",
 "    # set temporarily for this block only\n",
+"    # all compute calls within this block will use the specified scheduler\n",
 "    myvalue.compute()\n",
+"    anothervalue.compute()\n",
+"```\n",
 "\n",
-"dask.config.set(scheduler='processes')\n",
+"Or globally\n",
+"```python\n",
 "# set until further notice\n",
+"dask.config.set(scheduler='processes')\n",
 "```\n",
 "\n",
-"Lets see the difference for the familiar case of the flights data"
+"Let's try out a few schedulers on the familiar case of the flights data."
 ]
 },
 {
