Working with Scala and Spark Notebooks
Often the most frequent values or the five-number summary are not sufficient to gain a first understanding of the data. The term descriptive statistics is very generic and may refer to very complex ways of describing the data. Quantiles, a Pareto chart, or, when more than one attribute is analyzed, correlations are also examples of descriptive statistics. When sharing all these ways of looking at data aggregates, in many cases it is also important to share the specific computations used to arrive at them.
The Scala Notebook (https://github.com/Bridgewater/scala-notebook) and the Spark Notebook (https://github.com/andypetrella/spark-notebook) record the whole transformation path, and the results can be shared as a JSON-based file. The Spark Notebook project can be downloaded from http://spark-notebook.io, and I will provide a sample file with the book. I will use Spark, which I will cover in more detail in Chapter 3, Working with Spark and MLlib.
For this particular example, Spark will run in the local mode. Even in the local mode, Spark can exploit parallelism on your workstation, but it is limited to the number of cores and hyperthreads available on your laptop or workstation. With a simple configuration change, however, Spark can be pointed at a distributed set of machines and use resources across those nodes.
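As a sketch of that configuration change, only the master setting differs between the two modes (the cluster URL spark://master-host:7077 below is a placeholder, not a real address):

    import org.apache.spark.{SparkConf, SparkContext}

    // Local mode: local[*] uses all cores and hyperthreads on the workstation
    val conf = new SparkConf()
      .setAppName("chapter01")
      .setMaster("local[*]")
    // To run on a cluster instead, only the master URL changes, for example:
    //   .setMaster("spark://master-host:7077")   // placeholder cluster address
    val sc = new SparkContext(conf)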
Here is the set of commands to download the Spark Notebook and copy the necessary files from the code repository:
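The exact commands depend on the release you choose; the following shell sketch uses placeholder archive and path names:

    # The archive name is a placeholder; pick the build matching your
    # Scala/Spark versions from http://spark-notebook.io
    tar -xzf spark-notebook-<version>.tgz
    cd spark-notebook-<version>
    # Copy the sample notebook and data files from the book's code
    # repository into the notebooks directory (source path is a placeholder)
    cp -r /path/to/book-code/chapter01 notebooks/
    # Start the server; it listens on http://localhost:9000 by default
    bin/spark-notebook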
Now you can open the notebook at http://localhost:9000 (the server's default port) in your browser, as shown in the following screenshot:
Open the notebook by clicking on it. The statements are organized into cells and can be executed by clicking on the small right arrow at the top, as shown in the following screenshot, or run all cells at once by navigating to Cell | Run All:
First, we will look at the discrete variables. For example, to get the distribution of the labels, issue the following code:
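The listing itself was lost in extraction; here is a minimal sketch, assuming a sqlContext is in scope (Spark Notebook provides one) and that the data sits in kddcup.parquet with the target in a column named label (both names are assumptions):

    import sqlContext.implicits._

    // Read the dataset once and cache it in memory, so that the
    // subsequent aggregations run in about a second instead of a minute
    val df = sqlContext.read.parquet("kddcup.parquet").cache()

    // Distribution of the labels: row counts per distinct label value
    df.groupBy("label").count().orderBy($"count".desc).show()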
The first time I read the dataset, it took about a minute on a MacBook Pro, but Spark caches the data in memory, so the subsequent aggregation runs take only about a second. Spark Notebook provides you with the distribution of the values, as shown in the following screenshot:
I can also look at crosstab counts for pairs of discrete variables, which gives me an idea of the interdependencies between the variables, using http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.DataFrameStatFunctions (the object does not yet support computing correlation measures such as chi-square):
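A sketch of such a crosstab, assuming the service and flag column names of the KDD-style dataset used here:

    // Contingency table: one row per service value, one column per flag value.
    // Correlation measures such as chi-square on this table are not yet
    // supported by DataFrameStatFunctions.
    df.stat.crosstab("service", "flag").show()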
However, we can see that the most popular service is private, and it correlates well with the flag values. Another way to analyze dependencies is to look at zero entries. For example, certain flag values are clearly related to the SMTP and FTP traffic, since all their other entries are zero.
Of course, the most interesting correlations are with the target variable, but these are better discovered by supervised learning algorithms that I will cover in Chapter 3, Working with Spark and MLlib, and Chapter 5, Regression and Classification.
Analogously, we can compute correlations for numerical variables with the corr and cov functions (refer to Figure 01-6). In this case, the DataFrameStatFunctions class supports the Pearson correlation coefficient. Alternatively, we can use the standard SQL syntax on the parquet file directly:
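A sketch of both routes, assuming two numeric columns named duration and src_bytes and the same kddcup.parquet file as above (the corr SQL aggregate is available in recent Spark versions):

    // Pearson correlation and covariance via the DataFrame statistics functions
    val rho = df.stat.corr("duration", "src_bytes")
    val cov = df.stat.cov("duration", "src_bytes")

    // The same aggregate computed with SQL directly against the parquet file
    sqlContext.sql(
      "SELECT corr(duration, src_bytes) FROM parquet.`kddcup.parquet`"
    ).show()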
Finally, I promised to compute percentiles. Computing percentiles usually involves sorting the whole dataset, which is expensive; however, if the percentile is one of the first or the last ones, it is usually possible to optimize the computation:
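A minimal sketch of the shortcut, assuming we want the 99.9th percentile of a numeric duration column stored as a long (both the column name and its type are assumptions):

    // A percentile near the maximum only needs the top-k values,
    // not a sort of the whole dataset
    val n = df.count()
    val k = math.max(1, (n * 0.001).toInt)   // size of the top 0.1%
    val topK = df.select("duration").rdd
      .map(_.getLong(0))
      .top(k)                                // only the k largest values cross the network
    val p999 = topK.last                     // approximate 99.9th percentile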
Computing the exact percentiles in the more general case is more computationally expensive; an implementation is provided as part of the Spark Notebook example code.