<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <title>Spark on ZRJ | Study Notes</title>
        <link>https://blog.zrj.me/tags/spark/</link>
        <description>Recent content in Spark on ZRJ | Study Notes</description>
        <generator>Hugo -- gohugo.io</generator>
        <language>zh-CN</language>
        <lastBuildDate>Thu, 01 Nov 2018 16:51:45 +0800</lastBuildDate><atom:link href="https://blog.zrj.me/tags/spark/index.xml" rel="self" type="application/rss+xml" /><item>
        <title>Speeding up Maven compilation of Scala</title>
        <link>https://blog.zrj.me/posts/2018-11-01-%E6%8F%90%E5%8D%87-maven-%E7%BC%96%E8%AF%91-scala-%E7%9A%84%E9%80%9F%E5%BA%A6/</link>
        <pubDate>Thu, 01 Nov 2018 16:51:45 +0800</pubDate>
        
        <guid>https://blog.zrj.me/posts/2018-11-01-%E6%8F%90%E5%8D%87-maven-%E7%BC%96%E8%AF%91-scala-%E7%9A%84%E9%80%9F%E5%BA%A6/</guid>
        <description>&lt;p&gt;Our Spark jobs are written in Scala and built with Maven, but as the number of Scala source files keeps growing (a single project is now at 800+ source files), compilation speed has become a serious bottleneck: one build takes 10+ minutes, which badly hurts development efficiency.&lt;/p&gt;
&lt;p&gt;The first idea was to exclude code unrelated to my own jobs and see whether that would speed things up. According to &lt;a class=&#34;link&#34; href=&#34;https://stackoverflow.com/questions/17920920/maven-excluding-java-files-in-compilation&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://stackoverflow.com/questions/17920920/maven-excluding-java-files-in-compilation&lt;/a&gt;, files can be excluded with an exclude rule:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-xml&#34; data-lang=&#34;xml&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nt&#34;&gt;&amp;lt;plugin&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;nt&#34;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.maven.plugins&lt;span class=&#34;nt&#34;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;nt&#34;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;maven-compiler-plugin&lt;span class=&#34;nt&#34;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;nt&#34;&gt;&amp;lt;configuration&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;nt&#34;&gt;&amp;lt;excludes&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;nt&#34;&gt;&amp;lt;exclude&amp;gt;&lt;/span&gt;**/api/test/omi/*.java&lt;span class=&#34;nt&#34;&gt;&amp;lt;/exclude&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;nt&#34;&gt;&amp;lt;/excludes&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;nt&#34;&gt;&amp;lt;/configuration&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nt&#34;&gt;&amp;lt;/plugin&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;In my actual tests, however, this made no difference at all. A commenter under that answer asked whether the src/main/scala prefix was the issue, but with or without that prefix the result was the same: no effect.&lt;/p&gt;
&lt;p&gt;I then switched directions and searched again, which led to &lt;a class=&#34;link&#34; href=&#34;https://www.lightbend.com/blog/zinc-and-incremental-compilation&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://www.lightbend.com/blog/zinc-and-incremental-compilation&lt;/a&gt;, an article describing the almost magical effect zinc has on compile times, and then to &lt;a class=&#34;link&#34; href=&#34;http://hohonuuli.blogspot.com/2012/11/fast-scala-compilation-with-maven.html&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;http://hohonuuli.blogspot.com/2012/11/fast-scala-compilation-with-maven.html&lt;/a&gt;, a post that walks through installing and configuring a zinc server, so I followed along and gave it a try.&lt;/p&gt;
&lt;p&gt;The first step was to download the zinc zip package from &lt;a class=&#34;link&#34; href=&#34;https://github.com/typesafehub/zinc&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://github.com/typesafehub/zinc&lt;/a&gt;. Unfortunately, it turned out to be awkward to get running on Windows, and that was the end of that attempt.&lt;/p&gt;
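&lt;p&gt;For reference, on a machine where zinc does run, the Maven side mostly comes down to pointing scala-maven-plugin at incremental compilation and at the running zinc server. The snippet below is only a sketch that assumes the scala-maven-plugin 3.x options recompileMode and useZincServer; verify them against the plugin documentation for the version actually in use before copying anything:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;&amp;lt;plugin&amp;gt;
  &amp;lt;groupId&amp;gt;net.alchim31.maven&amp;lt;/groupId&amp;gt;
  &amp;lt;artifactId&amp;gt;scala-maven-plugin&amp;lt;/artifactId&amp;gt;
  &amp;lt;configuration&amp;gt;
    &amp;lt;!-- reuse analysis from earlier builds instead of recompiling everything --&amp;gt;
    &amp;lt;recompileMode&amp;gt;incremental&amp;lt;/recompileMode&amp;gt;
    &amp;lt;!-- delegate compilation to an already-running zinc server --&amp;gt;
    &amp;lt;useZincServer&amp;gt;true&amp;lt;/useZincServer&amp;gt;
  &amp;lt;/configuration&amp;gt;
&amp;lt;/plugin&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Incremental mode alone avoids recompiling unchanged sources; the zinc server additionally keeps a warm compiler JVM alive between Maven invocations, which saves the JVM startup and compiler warm-up cost on every build.&lt;/p&gt;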
&lt;p&gt;In the end I settled for speeding things up by raising the Eclipse startup memory parameters to -Xmx8G -Xms4G. Not elegant, but it helps.&lt;/p&gt;
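&lt;p&gt;Concretely, anything after the -vmargs line in eclipse.ini is passed to the JVM that runs Eclipse, so the heap flags go there, replacing any existing -Xms/-Xmx entries. A minimal sketch, with the exact path of eclipse.ini depending on the install:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;-vmargs
-Xms4G
-Xmx8G
&lt;/code&gt;&lt;/pre&gt;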
&lt;hr&gt;
&lt;h2 id=&#34;历史评论&#34;&gt;Historical comments
&lt;/h2&gt;&lt;p&gt;&lt;strong&gt;Continuing to tinker with Maven build speed | ZRJ&lt;/strong&gt; (2019-03-28 17:43:26):&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;[…] &lt;a class=&#34;link&#34; href=&#34;https://zrj.me/archives/1886&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://zrj.me/archives/1886&lt;/a&gt; […]&lt;/p&gt;
&lt;/blockquote&gt;
</description>
        </item>
        <item>
        <title>Spark word count and streaming examples</title>
        <link>https://blog.zrj.me/posts/2018-06-26-spark-word-count-%E5%92%8C-streaming-%E7%9A%84%E4%BE%8B%E5%AD%90/</link>
        <pubDate>Tue, 26 Jun 2018 10:24:49 +0800</pubDate>
        
        <guid>https://blog.zrj.me/posts/2018-06-26-spark-word-count-%E5%92%8C-streaming-%E7%9A%84%E4%BE%8B%E5%AD%90/</guid>
        <description>&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-scala&#34; data-lang=&#34;scala&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;k&#34;&gt;package&lt;/span&gt; &lt;span class=&#34;nn&#34;&gt;me.zrj.test.test20170731&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;k&#34;&gt;import&lt;/span&gt; &lt;span class=&#34;nn&#34;&gt;org.apache.spark.SparkContext&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;k&#34;&gt;import&lt;/span&gt; &lt;span class=&#34;nn&#34;&gt;org.apache.spark.SparkConf&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;k&#34;&gt;import&lt;/span&gt; &lt;span class=&#34;nn&#34;&gt;java.util.Properties&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;k&#34;&gt;import&lt;/span&gt; &lt;span class=&#34;nn&#34;&gt;org.apache.log4j.PropertyConfigurator&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;k&#34;&gt;import&lt;/span&gt; &lt;span class=&#34;nn&#34;&gt;org.apache.spark.streaming.StreamingContext&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;k&#34;&gt;import&lt;/span&gt; &lt;span class=&#34;nn&#34;&gt;org.apache.spark.streaming.Seconds&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;k&#34;&gt;object&lt;/span&gt; &lt;span class=&#34;nc&#34;&gt;SparkWordCount&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;k&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;logger&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;getLogger&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;this&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;getClass&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;properties&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;kc&#34;&gt;null&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;k&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;getLogger&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;clazz&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kt&#34;&gt;Class&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;[&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;_&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;],&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;properties&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kt&#34;&gt;Properties&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;null&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kt&#34;&gt;org.apache.log4j.Logger&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;properties&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;==&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;null&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;c1&#34;&gt;// Configure the log4j properties
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;k&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;prop&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;new&lt;/span&gt; &lt;span class=&#34;nc&#34;&gt;Properties&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;n&#34;&gt;prop&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;setProperty&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;&amp;#34;log4j.logger.me.zrj.test.test20170731&amp;#34;&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;&amp;#34;INFO, stdout&amp;#34;&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;n&#34;&gt;prop&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;setProperty&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;&amp;#34;log4j.appender.stdout&amp;#34;&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;&amp;#34;org.apache.log4j.ConsoleAppender&amp;#34;&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;n&#34;&gt;prop&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;setProperty&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;&amp;#34;log4j.appender.stdout.layout&amp;#34;&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;&amp;#34;org.apache.log4j.PatternLayout&amp;#34;&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;n&#34;&gt;prop&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;setProperty&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;&amp;#34;log4j.appender.stdout.layout.ConversionPattern&amp;#34;&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;&amp;#34;[%d][%-5p]%m -- %F:%L%n&amp;#34;&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;c1&#34;&gt;// Apply the log configuration
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;nc&#34;&gt;PropertyConfigurator&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;configure&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;prop&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;o&#34;&gt;}&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;else&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;nc&#34;&gt;PropertyConfigurator&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;configure&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;properties&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;o&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;n&#34;&gt;org&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;apache&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;log4j&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nc&#34;&gt;Logger&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;getLogger&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;clazz&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;o&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;k&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;wordCount&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;()&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;{&lt;/span&gt;    
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;sc&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;new&lt;/span&gt; &lt;span class=&#34;nc&#34;&gt;SparkContext&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;new&lt;/span&gt; &lt;span class=&#34;nc&#34;&gt;SparkConf&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;().&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;setMaster&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;&amp;#34;local[2]&amp;#34;&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;).&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;setAppName&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;&amp;#34;Spark Count&amp;#34;&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;))&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;textFile&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;sc&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;textFile&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;&amp;#34;file:///D://downloads//hdfs-site.xml&amp;#34;&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;counts&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;textFile&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;flatMap&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;line&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;line&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;split&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;&amp;#34; &amp;#34;&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;))&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;                     &lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;map&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;word&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;word&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;mi&#34;&gt;1&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;))&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;                     &lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;reduceByKey&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;_&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;+&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;_&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;res&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;counts&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;collect&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;sortBy&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;-&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;_&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;_2&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;).&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;slice&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;0&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;mi&#34;&gt;10&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;n&#34;&gt;logger&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;info&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;s&amp;#34;res &lt;/span&gt;&lt;span class=&#34;si&#34;&gt;${&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;res&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;toList&lt;/span&gt;&lt;span class=&#34;si&#34;&gt;}&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;o&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;k&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;socketStreaming&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;()&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;serverIP&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;&amp;#34;192.168.56.101&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;serverPort&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;mi&#34;&gt;9999&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;c1&#34;&gt;// Create a StreamingContext with 1-second batches
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;ssc&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;new&lt;/span&gt; &lt;span class=&#34;nc&#34;&gt;StreamingContext&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;new&lt;/span&gt; &lt;span class=&#34;nc&#34;&gt;SparkConf&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;().&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;setMaster&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;&amp;#34;local[2]&amp;#34;&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;).&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;setAppName&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;&amp;#34;Spark Count&amp;#34;&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;),&lt;/span&gt; &lt;span class=&#34;nc&#34;&gt;Seconds&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;1&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;));&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;c1&#34;&gt;// Get a DStream that connects to the listening address and port
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;lines&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;ssc&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;socketTextStream&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;serverIP&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;serverPort&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;);&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;c1&#34;&gt;// Split each line of input into words
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;words&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;lines&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;flatMap&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;_&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;split&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;&amp;#34; &amp;#34;&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;).&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;toSeq&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;);&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;c1&#34;&gt;// Count the occurrences of each word
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;pairs&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;words&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;map&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;word&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;word&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;mi&#34;&gt;1&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;));&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;wordCounts&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;pairs&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;reduceByKey&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;_&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;+&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;_&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;);&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;c1&#34;&gt;// Print the results
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;n&#34;&gt;wordCounts&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;print&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;();&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;n&#34;&gt;ssc&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;start&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;();&lt;/span&gt;             &lt;span class=&#34;c1&#34;&gt;// Start the streaming computation
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;n&#34;&gt;ssc&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;awaitTermination&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;();&lt;/span&gt;  &lt;span class=&#34;c1&#34;&gt;// Block until the computation terminates
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;o&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;k&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;main&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;args&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kt&#34;&gt;Array&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;[&lt;/span&gt;&lt;span class=&#34;kt&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;])&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;n&#34;&gt;logger&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;info&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;s&amp;#34;starting spark...&amp;#34;&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;n&#34;&gt;wordCount&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;n&#34;&gt;logger&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;info&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;s&amp;#34;program terminated&amp;#34;&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;);&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;o&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;o&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;After running it, the word count output looks like this:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[2018-06-26 10:12:31,086][INFO ]starting spark... -- SparkWordCount.scala:62
Using Spark&amp;#39;s default log4j profile: org/apache/spark/log4j-defaults.properties
18/06/26 10:12:31 INFO SparkContext: Running Spark version 1.6.1
18/06/26 10:12:31 INFO SecurityManager: Changing view acls to: adenzhang
18/06/26 10:12:31 INFO SecurityManager: Changing modify acls to: adenzhang
18/06/26 10:12:31 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(adenzhang); users with modify permissions: Set(adenzhang)
18/06/26 10:12:32 INFO Utils: Successfully started service &amp;#39;sparkDriver&amp;#39; on port 53504.
18/06/26 10:12:32 INFO Slf4jLogger: Slf4jLogger started
18/06/26 10:12:32 INFO Remoting: Starting remoting
18/06/26 10:12:32 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@10.41.88.38:53517]
18/06/26 10:12:32 INFO Utils: Successfully started service &amp;#39;sparkDriverActorSystem&amp;#39; on port 53517.
18/06/26 10:12:32 INFO SparkEnv: Registering MapOutputTracker
18/06/26 10:12:32 INFO SparkEnv: Registering BlockManagerMaster
18/06/26 10:12:32 INFO DiskBlockManager: Created local directory at C:\Users\adenzhang\AppData\Local\Temp\blockmgr-fa868aad-7249-4bf9-934d-84118c5ee523
18/06/26 10:12:32 INFO MemoryStore: MemoryStore started with capacity 2.4 GB
18/06/26 10:12:32 INFO SparkEnv: Registering OutputCommitCoordinator
18/06/26 10:12:32 INFO Utils: Successfully started service &amp;#39;SparkUI&amp;#39; on port 4040.
18/06/26 10:12:32 INFO SparkUI: Started SparkUI at http://10.41.88.38:4040
18/06/26 10:12:32 INFO Executor: Starting executor ID driver on host localhost
18/06/26 10:12:32 INFO Utils: Successfully started service &amp;#39;org.apache.spark.network.netty.NettyBlockTransferService&amp;#39; on port 53530.
18/06/26 10:12:32 INFO NettyBlockTransferService: Server created on 53530
18/06/26 10:12:32 INFO BlockManagerMaster: Trying to register BlockManager
18/06/26 10:12:32 INFO BlockManagerMasterEndpoint: Registering block manager localhost:53530 with 2.4 GB RAM, BlockManagerId(driver, localhost, 53530)
18/06/26 10:12:32 INFO BlockManagerMaster: Registered BlockManager
18/06/26 10:12:33 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 107.7 KB, free 107.7 KB)
18/06/26 10:12:33 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 9.8 KB, free 117.5 KB)
18/06/26 10:12:33 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:53530 (size: 9.8 KB, free: 2.4 GB)
18/06/26 10:12:33 INFO SparkContext: Created broadcast 0 from textFile at SparkWordCount.scala:30
18/06/26 10:12:33 INFO SparkContext: Starting job: collect at SparkWordCount.scala:34
18/06/26 10:12:33 INFO DAGScheduler: Registering RDD 3 (map at SparkWordCount.scala:32)
18/06/26 10:12:33 INFO DAGScheduler: Got job 0 (collect at SparkWordCount.scala:34) with 2 output partitions
18/06/26 10:12:33 INFO DAGScheduler: Final stage: ResultStage 1 (collect at SparkWordCount.scala:34)
18/06/26 10:12:33 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 0)
18/06/26 10:12:33 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 0)
18/06/26 10:12:33 INFO DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[3] at map at SparkWordCount.scala:32), which has no missing parents
18/06/26 10:12:33 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 4.1 KB, free 121.6 KB)
18/06/26 10:12:33 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.3 KB, free 123.9 KB)
18/06/26 10:12:33 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on localhost:53530 (size: 2.3 KB, free: 2.4 GB)
18/06/26 10:12:33 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1015
18/06/26 10:12:33 INFO DAGScheduler: Submitting 2 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[3] at map at SparkWordCount.scala:32)
18/06/26 10:12:33 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
18/06/26 10:12:33 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, partition 0,PROCESS_LOCAL, 2123 bytes)
18/06/26 10:12:33 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, localhost, partition 1,PROCESS_LOCAL, 2123 bytes)
18/06/26 10:12:33 INFO Executor: Running task 1.0 in stage 0.0 (TID 1)
18/06/26 10:12:33 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
18/06/26 10:12:33 INFO HadoopRDD: Input split: file:/D:/downloads/hdfs-site.xml:8809+8809
18/06/26 10:12:33 INFO HadoopRDD: Input split: file:/D:/downloads/hdfs-site.xml:0+8809
18/06/26 10:12:33 INFO Executor: Finished task 1.0 in stage 0.0 (TID 1). 2254 bytes result sent to driver
18/06/26 10:12:33 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 2254 bytes result sent to driver
18/06/26 10:12:33 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 141 ms on localhost (1/2)
18/06/26 10:12:33 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 158 ms on localhost (2/2)
18/06/26 10:12:33 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
18/06/26 10:12:33 INFO DAGScheduler: ShuffleMapStage 0 (map at SparkWordCount.scala:32) finished in 0.170 s
18/06/26 10:12:33 INFO DAGScheduler: looking for newly runnable stages
18/06/26 10:12:33 INFO DAGScheduler: running: Set()
18/06/26 10:12:33 INFO DAGScheduler: waiting: Set(ResultStage 1)
18/06/26 10:12:33 INFO DAGScheduler: failed: Set()
18/06/26 10:12:33 INFO DAGScheduler: Submitting ResultStage 1 (ShuffledRDD[4] at reduceByKey at SparkWordCount.scala:33), which has no missing parents
18/06/26 10:12:33 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 2.6 KB, free 126.5 KB)
18/06/26 10:12:33 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 1603.0 B, free 128.1 KB)
18/06/26 10:12:33 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on localhost:53530 (size: 1603.0 B, free: 2.4 GB)
18/06/26 10:12:33 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:1015
18/06/26 10:12:33 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 1 (ShuffledRDD[4] at reduceByKey at SparkWordCount.scala:33)
18/06/26 10:12:33 INFO TaskSchedulerImpl: Adding task set 1.0 with 2 tasks
18/06/26 10:12:33 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 2, localhost, partition 0,NODE_LOCAL, 1894 bytes)
18/06/26 10:12:33 INFO TaskSetManager: Starting task 1.0 in stage 1.0 (TID 3, localhost, partition 1,NODE_LOCAL, 1894 bytes)
18/06/26 10:12:33 INFO Executor: Running task 0.0 in stage 1.0 (TID 2)
18/06/26 10:12:33 INFO Executor: Running task 1.0 in stage 1.0 (TID 3)
18/06/26 10:12:33 INFO ShuffleBlockFetcherIterator: Getting 2 non-empty blocks out of 2 blocks
18/06/26 10:12:33 INFO ShuffleBlockFetcherIterator: Getting 2 non-empty blocks out of 2 blocks
18/06/26 10:12:33 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 3 ms
18/06/26 10:12:33 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 3 ms
18/06/26 10:12:33 INFO Executor: Finished task 1.0 in stage 1.0 (TID 3). 9200 bytes result sent to driver
18/06/26 10:12:33 INFO Executor: Finished task 0.0 in stage 1.0 (TID 2). 8237 bytes result sent to driver
18/06/26 10:12:33 INFO TaskSetManager: Finished task 1.0 in stage 1.0 (TID 3) in 44 ms on localhost (1/2)
18/06/26 10:12:33 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 2) in 46 ms on localhost (2/2)
18/06/26 10:12:33 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool 
18/06/26 10:12:33 INFO DAGScheduler: ResultStage 1 (collect at SparkWordCount.scala:34) finished in 0.046 s
18/06/26 10:12:33 INFO DAGScheduler: Job 0 finished: collect at SparkWordCount.scala:34, took 0.294973 s
[2018-06-26 10:12:33,654][INFO ]res List((,1926), (&amp;lt;/property&amp;gt;,107), (&amp;lt;property&amp;gt;,107), (the,15), (&amp;lt;value&amp;gt;org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider&amp;lt;/value&amp;gt;,13), (&amp;lt;value&amp;gt;nn1,nn2,nn3&amp;lt;/value&amp;gt;,9), (is,5), (&amp;lt;value&amp;gt;nn1,nn2&amp;lt;/value&amp;gt;,4), (will,4), (port,4)) -- SparkWordCount.scala:35
18/06/26 10:12:33 INFO SparkWordCount$: res List((,1926), (&amp;lt;/property&amp;gt;,107), (&amp;lt;property&amp;gt;,107), (the,15), (&amp;lt;value&amp;gt;org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider&amp;lt;/value&amp;gt;,13), (&amp;lt;value&amp;gt;nn1,nn2,nn3&amp;lt;/value&amp;gt;,9), (is,5), (&amp;lt;value&amp;gt;nn1,nn2&amp;lt;/value&amp;gt;,4), (will,4), (port,4))
18/06/26 10:12:33 INFO SparkWordCount$: program terminated
[2018-06-26 10:12:33,655][INFO ]program terminated -- SparkWordCount.scala:64
18/06/26 10:12:33 INFO SparkContext: Invoking stop() from shutdown hook
18/06/26 10:12:33 INFO SparkUI: Stopped Spark web UI at http://10.41.88.38:4040
18/06/26 10:12:33 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
18/06/26 10:12:33 INFO MemoryStore: MemoryStore cleared
18/06/26 10:12:33 INFO BlockManager: BlockManager stopped
18/06/26 10:12:33 INFO BlockManagerMaster: BlockManagerMaster stopped
18/06/26 10:12:33 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
18/06/26 10:12:33 INFO SparkContext: Successfully stopped SparkContext
18/06/26 10:12:33 INFO ShutdownHookManager: Shutdown hook called
18/06/26 10:12:33 INFO ShutdownHookManager: Deleting directory C:\Users\adenzhang\AppData\Local\Temp\spark-104c1b9e-bf57-442e-981a-9d6461567197
18/06/26 10:12:33 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
18/06/26 10:12:33 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The streaming version reads from a TCP socket, so something has to be listening on 192.168.56.101:9999 before it can receive any data; a quick way to provide such a source is sketched below, followed by the log output from an actual run.&lt;/p&gt;
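&lt;p&gt;A minimal test source, assuming 192.168.56.101 is a reachable Linux VM with netcat installed, is a listening netcat session; every line typed into it becomes a line of the DStream:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;nc -lk 9999
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;In the run shown below nothing was listening on that port yet, which is why the receiver reports Connection refused and schedules a restart:&lt;/p&gt;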
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[2018-06-26 10:55:44,901][INFO ]starting spark... -- SparkWordCount.scala:62
Using Spark&amp;#39;s default log4j profile: org/apache/spark/log4j-defaults.properties
18/06/26 10:55:45 INFO SparkContext: Running Spark version 1.6.1
18/06/26 10:55:45 INFO SecurityManager: Changing view acls to: adenzhang
18/06/26 10:55:45 INFO SecurityManager: Changing modify acls to: adenzhang
18/06/26 10:55:45 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(adenzhang); users with modify permissions: Set(adenzhang)
18/06/26 10:55:45 INFO Utils: Successfully started service &amp;#39;sparkDriver&amp;#39; on port 55640.
18/06/26 10:55:46 INFO Slf4jLogger: Slf4jLogger started
18/06/26 10:55:46 INFO Remoting: Starting remoting
18/06/26 10:55:46 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@10.41.88.38:55654]
18/06/26 10:55:46 INFO Utils: Successfully started service &amp;#39;sparkDriverActorSystem&amp;#39; on port 55654.
18/06/26 10:55:46 INFO SparkEnv: Registering MapOutputTracker
18/06/26 10:55:46 INFO SparkEnv: Registering BlockManagerMaster
18/06/26 10:55:46 INFO DiskBlockManager: Created local directory at C:\Users\adenzhang\AppData\Local\Temp\blockmgr-ed31e89a-8ef4-408c-91ec-33d881b30755
18/06/26 10:55:46 INFO MemoryStore: MemoryStore started with capacity 2.4 GB
18/06/26 10:55:46 INFO SparkEnv: Registering OutputCommitCoordinator
18/06/26 10:55:46 INFO Utils: Successfully started service &amp;#39;SparkUI&amp;#39; on port 4040.
18/06/26 10:55:46 INFO SparkUI: Started SparkUI at http://10.41.88.38:4040
18/06/26 10:55:46 INFO Executor: Starting executor ID driver on host localhost
18/06/26 10:55:46 INFO Utils: Successfully started service &amp;#39;org.apache.spark.network.netty.NettyBlockTransferService&amp;#39; on port 55667.
18/06/26 10:55:46 INFO NettyBlockTransferService: Server created on 55667
18/06/26 10:55:46 INFO BlockManagerMaster: Trying to register BlockManager
18/06/26 10:55:46 INFO BlockManagerMasterEndpoint: Registering block manager localhost:55667 with 2.4 GB RAM, BlockManagerId(driver, localhost, 55667)
18/06/26 10:55:46 INFO BlockManagerMaster: Registered BlockManager
18/06/26 10:55:47 INFO ReceiverTracker: Starting 1 receivers
18/06/26 10:55:47 INFO ReceiverTracker: ReceiverTracker started
18/06/26 10:55:47 INFO ForEachDStream: metadataCleanupDelay = -1
18/06/26 10:55:47 INFO ShuffledDStream: metadataCleanupDelay = -1
18/06/26 10:55:47 INFO MappedDStream: metadataCleanupDelay = -1
18/06/26 10:55:47 INFO FlatMappedDStream: metadataCleanupDelay = -1
18/06/26 10:55:47 INFO SocketInputDStream: metadataCleanupDelay = -1
18/06/26 10:55:47 INFO SocketInputDStream: Slide time = 1000 ms
18/06/26 10:55:47 INFO SocketInputDStream: Storage level = StorageLevel(false, false, false, false, 1)
18/06/26 10:55:47 INFO SocketInputDStream: Checkpoint interval = null
18/06/26 10:55:47 INFO SocketInputDStream: Remember duration = 1000 ms
18/06/26 10:55:47 INFO SocketInputDStream: Initialized and validated org.apache.spark.streaming.dstream.SocketInputDStream@258773e4
18/06/26 10:55:47 INFO FlatMappedDStream: Slide time = 1000 ms
18/06/26 10:55:47 INFO FlatMappedDStream: Storage level = StorageLevel(false, false, false, false, 1)
18/06/26 10:55:47 INFO FlatMappedDStream: Checkpoint interval = null
18/06/26 10:55:47 INFO FlatMappedDStream: Remember duration = 1000 ms
18/06/26 10:55:47 INFO FlatMappedDStream: Initialized and validated org.apache.spark.streaming.dstream.FlatMappedDStream@509141bb
18/06/26 10:55:47 INFO MappedDStream: Slide time = 1000 ms
18/06/26 10:55:47 INFO MappedDStream: Storage level = StorageLevel(false, false, false, false, 1)
18/06/26 10:55:47 INFO MappedDStream: Checkpoint interval = null
18/06/26 10:55:47 INFO MappedDStream: Remember duration = 1000 ms
18/06/26 10:55:47 INFO MappedDStream: Initialized and validated org.apache.spark.streaming.dstream.MappedDStream@3a762c6a
18/06/26 10:55:47 INFO ShuffledDStream: Slide time = 1000 ms
18/06/26 10:55:47 INFO ShuffledDStream: Storage level = StorageLevel(false, false, false, false, 1)
18/06/26 10:55:47 INFO ShuffledDStream: Checkpoint interval = null
18/06/26 10:55:47 INFO ShuffledDStream: Remember duration = 1000 ms
18/06/26 10:55:47 INFO ShuffledDStream: Initialized and validated org.apache.spark.streaming.dstream.ShuffledDStream@19eb0f26
18/06/26 10:55:47 INFO ForEachDStream: Slide time = 1000 ms
18/06/26 10:55:47 INFO ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1)
18/06/26 10:55:47 INFO ForEachDStream: Checkpoint interval = null
18/06/26 10:55:47 INFO ForEachDStream: Remember duration = 1000 ms
18/06/26 10:55:47 INFO ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@7ed47ac
18/06/26 10:55:47 INFO RecurringTimer: Started timer for JobGenerator at time 1529981748000
18/06/26 10:55:47 INFO JobGenerator: Started JobGenerator at 1529981748000 ms
18/06/26 10:55:47 INFO JobScheduler: Started JobScheduler
18/06/26 10:55:47 INFO StreamingContext: StreamingContext started
18/06/26 10:55:47 INFO ReceiverTracker: Receiver 0 started
18/06/26 10:55:47 INFO DAGScheduler: Got job 0 (start at SparkWordCount.scala:57) with 1 output partitions
18/06/26 10:55:47 INFO DAGScheduler: Final stage: ResultStage 0 (start at SparkWordCount.scala:57)
18/06/26 10:55:47 INFO DAGScheduler: Parents of final stage: List()
18/06/26 10:55:47 INFO DAGScheduler: Missing parents: List()
18/06/26 10:55:47 INFO DAGScheduler: Submitting ResultStage 0 (Receiver 0 ParallelCollectionRDD[0] at makeRDD at ReceiverTracker.scala:588), which has no missing parents
18/06/26 10:55:47 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 34.1 KB, free 34.1 KB)
18/06/26 10:55:47 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 11.0 KB, free 45.1 KB)
18/06/26 10:55:47 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:55667 (size: 11.0 KB, free: 2.4 GB)
18/06/26 10:55:47 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1015
18/06/26 10:55:47 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (Receiver 0 ParallelCollectionRDD[0] at makeRDD at ReceiverTracker.scala:588)
18/06/26 10:55:47 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
18/06/26 10:55:47 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, partition 0,PROCESS_LOCAL, 2642 bytes)
18/06/26 10:55:47 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
18/06/26 10:55:47 INFO RecurringTimer: Started timer for BlockGenerator at time 1529981747400
18/06/26 10:55:47 INFO BlockGenerator: Started BlockGenerator
18/06/26 10:55:47 INFO BlockGenerator: Started block pushing thread
18/06/26 10:55:47 INFO ReceiverTracker: Registered receiver for stream 0 from 10.41.88.38:55640
18/06/26 10:55:47 INFO ReceiverSupervisorImpl: Starting receiver
18/06/26 10:55:47 INFO ReceiverSupervisorImpl: Called receiver onStart
18/06/26 10:55:47 INFO ReceiverSupervisorImpl: Waiting for receiver to be stopped
18/06/26 10:55:47 INFO SocketReceiver: Connecting to 192.168.56.101:9999
18/06/26 10:55:48 INFO JobScheduler: Added jobs for time 1529981748000 ms
18/06/26 10:55:48 INFO JobScheduler: Starting job streaming job 1529981748000 ms.0 from job set of time 1529981748000 ms
18/06/26 10:55:48 INFO SparkContext: Starting job: print at SparkWordCount.scala:55
18/06/26 10:55:48 INFO DAGScheduler: Registering RDD 3 (map at SparkWordCount.scala:51)
18/06/26 10:55:48 INFO DAGScheduler: Got job 1 (print at SparkWordCount.scala:55) with 1 output partitions
18/06/26 10:55:48 INFO DAGScheduler: Final stage: ResultStage 2 (print at SparkWordCount.scala:55)
18/06/26 10:55:48 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 1)
18/06/26 10:55:48 INFO DAGScheduler: Missing parents: List()
18/06/26 10:55:48 INFO DAGScheduler: Submitting ResultStage 2 (ShuffledRDD[4] at reduceByKey at SparkWordCount.scala:52), which has no missing parents
18/06/26 10:55:48 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 2.6 KB, free 47.7 KB)
18/06/26 10:55:48 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 1641.0 B, free 49.3 KB)
18/06/26 10:55:48 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on localhost:55667 (size: 1641.0 B, free: 2.4 GB)
18/06/26 10:55:48 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1015
18/06/26 10:55:48 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 2 (ShuffledRDD[4] at reduceByKey at SparkWordCount.scala:52)
18/06/26 10:55:48 INFO TaskSchedulerImpl: Adding task set 2.0 with 1 tasks
18/06/26 10:55:48 INFO TaskSetManager: Starting task 0.0 in stage 2.0 (TID 1, localhost, partition 0,PROCESS_LOCAL, 1894 bytes)
18/06/26 10:55:48 INFO Executor: Running task 0.0 in stage 2.0 (TID 1)
18/06/26 10:55:48 INFO ShuffleBlockFetcherIterator: Getting 0 non-empty blocks out of 0 blocks
18/06/26 10:55:48 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 3 ms
18/06/26 10:55:48 INFO Executor: Finished task 0.0 in stage 2.0 (TID 1). 1161 bytes result sent to driver
18/06/26 10:55:48 INFO TaskSetManager: Finished task 0.0 in stage 2.0 (TID 1) in 29 ms on localhost (1/1)
18/06/26 10:55:48 INFO TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool 
18/06/26 10:55:48 INFO DAGScheduler: ResultStage 2 (print at SparkWordCount.scala:55) finished in 0.031 s
18/06/26 10:55:48 INFO DAGScheduler: Job 1 finished: print at SparkWordCount.scala:55, took 0.045668 s
18/06/26 10:55:48 INFO SparkContext: Starting job: print at SparkWordCount.scala:55
18/06/26 10:55:48 INFO MapOutputTrackerMaster: Size of output statuses for shuffle 0 is 82 bytes
18/06/26 10:55:48 INFO DAGScheduler: Got job 2 (print at SparkWordCount.scala:55) with 1 output partitions
18/06/26 10:55:48 INFO DAGScheduler: Final stage: ResultStage 4 (print at SparkWordCount.scala:55)
18/06/26 10:55:48 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 3)
18/06/26 10:55:48 INFO DAGScheduler: Missing parents: List()
18/06/26 10:55:48 INFO DAGScheduler: Submitting ResultStage 4 (ShuffledRDD[4] at reduceByKey at SparkWordCount.scala:52), which has no missing parents
18/06/26 10:55:48 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 2.6 KB, free 51.9 KB)
18/06/26 10:55:48 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 1641.0 B, free 53.5 KB)
18/06/26 10:55:48 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on localhost:55667 (size: 1641.0 B, free: 2.4 GB)
18/06/26 10:55:48 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:1015
18/06/26 10:55:48 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 4 (ShuffledRDD[4] at reduceByKey at SparkWordCount.scala:52)
18/06/26 10:55:48 INFO TaskSchedulerImpl: Adding task set 4.0 with 1 tasks
18/06/26 10:55:48 INFO TaskSetManager: Starting task 0.0 in stage 4.0 (TID 2, localhost, partition 1,PROCESS_LOCAL, 1894 bytes)
18/06/26 10:55:48 INFO Executor: Running task 0.0 in stage 4.0 (TID 2)
18/06/26 10:55:48 INFO ShuffleBlockFetcherIterator: Getting 0 non-empty blocks out of 0 blocks
18/06/26 10:55:48 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
18/06/26 10:55:48 INFO Executor: Finished task 0.0 in stage 4.0 (TID 2). 1161 bytes result sent to driver
18/06/26 10:55:48 INFO TaskSetManager: Finished task 0.0 in stage 4.0 (TID 2) in 4 ms on localhost (1/1)
18/06/26 10:55:48 INFO TaskSchedulerImpl: Removed TaskSet 4.0, whose tasks have all completed, from pool 
18/06/26 10:55:48 INFO DAGScheduler: ResultStage 4 (print at SparkWordCount.scala:55) finished in 0.005 s
18/06/26 10:55:48 INFO DAGScheduler: Job 2 finished: print at SparkWordCount.scala:55, took 0.013113 s
-------------------------------------------
Time: 1529981748000 ms
-------------------------------------------
18/06/26 10:55:48 INFO JobScheduler: Finished job streaming job 1529981748000 ms.0 from job set of time 1529981748000 ms
18/06/26 10:55:48 INFO JobScheduler: Total delay: 0.148 s for time 1529981748000 ms (execution: 0.081 s)

18/06/26 10:55:48 INFO ReceivedBlockTracker: Deleting batches ArrayBuffer()
18/06/26 10:55:48 INFO InputInfoTracker: remove old batch metadata: 
18/06/26 10:55:48 WARN ReceiverSupervisorImpl: Restarting receiver with delay 2000 ms: Error connecting to 192.168.56.101:9999
java.net.ConnectException: Connection refused: connect
	at java.net.DualStackPlainSocketImpl.connect0(Native Method)
	at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:79)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
	at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:589)
	at java.net.Socket.connect(Socket.java:538)
	at java.net.Socket.&amp;lt;init&amp;gt;(Socket.java:434)
	at java.net.Socket.&amp;lt;init&amp;gt;(Socket.java:211)
	at org.apache.spark.streaming.dstream.SocketReceiver.receive(SocketInputDStream.scala:73)
	at org.apache.spark.streaming.dstream.SocketReceiver$$anon$2.run(SocketInputDStream.scala:59)
18/06/26 10:55:48 INFO ReceiverSupervisorImpl: Stopping receiver with message: Restarting receiver with delay 2000ms: Error connecting to 192.168.56.101:9999: java.net.ConnectException: Connection refused: connect
18/06/26 10:55:48 INFO ReceiverSupervisorImpl: Called receiver onStop
18/06/26 10:55:48 INFO ReceiverSupervisorImpl: Deregistering receiver 0
18/06/26 10:55:48 ERROR ReceiverTracker: Deregistered receiver for stream 0: Restarting receiver with delay 2000ms: Error connecting to 192.168.56.101:9999 - java.net.ConnectException: Connection refused: connect
	at java.net.DualStackPlainSocketImpl.connect0(Native Method)
	at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:79)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
	at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:589)
	at java.net.Socket.connect(Socket.java:538)
	at java.net.Socket.&amp;lt;init&amp;gt;(Socket.java:434)
	at java.net.Socket.&amp;lt;init&amp;gt;(Socket.java:211)
	at org.apache.spark.streaming.dstream.SocketReceiver.receive(SocketInputDStream.scala:73)
	at org.apache.spark.streaming.dstream.SocketReceiver$$anon$2.run(SocketInputDStream.scala:59)

18/06/26 10:55:48 INFO ReceiverSupervisorImpl: Stopped receiver 0
18/06/26 10:55:49 INFO JobScheduler: Added jobs for time 1529981749000 ms
18/06/26 10:55:49 INFO JobScheduler: Starting job streaming job 1529981749000 ms.0 from job set of time 1529981749000 ms
18/06/26 10:55:49 INFO SparkContext: Starting job: print at SparkWordCount.scala:55
18/06/26 10:55:49 INFO DAGScheduler: Registering RDD 7 (map at SparkWordCount.scala:51)
18/06/26 10:55:49 INFO DAGScheduler: Got job 3 (print at SparkWordCount.scala:55) with 1 output partitions
18/06/26 10:55:49 INFO DAGScheduler: Final stage: ResultStage 6 (print at SparkWordCount.scala:55)
18/06/26 10:55:49 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 5)
18/06/26 10:55:49 INFO DAGScheduler: Missing parents: List()
18/06/26 10:55:49 INFO DAGScheduler: Submitting ResultStage 6 (ShuffledRDD[8] at reduceByKey at SparkWordCount.scala:52), which has no missing parents
18/06/26 10:55:49 INFO MemoryStore: Block broadcast_3 stored as values in memory (estimated size 2.6 KB, free 56.1 KB)
18/06/26 10:55:49 INFO MemoryStore: Block broadcast_3_piece0 stored as bytes in memory (estimated size 1632.0 B, free 56.1 KB)
18/06/26 10:55:49 INFO BlockManagerInfo: Removed broadcast_2_piece0 on localhost:55667 in memory (size: 1641.0 B, free: 2.4 GB)
18/06/26 10:55:49 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on localhost:55667 (size: 1632.0 B, free: 2.4 GB)
18/06/26 10:55:49 INFO SparkContext: Created broadcast 3 from broadcast at DAGScheduler.scala:1015
18/06/26 10:55:49 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 6 (ShuffledRDD[8] at reduceByKey at SparkWordCount.scala:52)
18/06/26 10:55:49 INFO TaskSchedulerImpl: Adding task set 6.0 with 1 tasks
18/06/26 10:55:49 INFO TaskSetManager: Starting task 0.0 in stage 6.0 (TID 3, localhost, partition 0,PROCESS_LOCAL, 1894 bytes)
18/06/26 10:55:49 INFO Executor: Running task 0.0 in stage 6.0 (TID 3)
18/06/26 10:55:49 INFO ContextCleaner: Cleaned accumulator 3
18/06/26 10:55:49 INFO ShuffleBlockFetcherIterator: Getting 0 non-empty blocks out of 0 blocks
18/06/26 10:55:49 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 1 ms
18/06/26 10:55:49 INFO BlockManagerInfo: Removed broadcast_1_piece0 on localhost:55667 in memory (size: 1641.0 B, free: 2.4 GB)
18/06/26 10:55:49 INFO ContextCleaner: Cleaned accumulator 2
18/06/26 10:55:49 INFO Executor: Finished task 0.0 in stage 6.0 (TID 3). 1161 bytes result sent to driver
18/06/26 10:55:49 INFO TaskSetManager: Finished task 0.0 in stage 6.0 (TID 3) in 5 ms on localhost (1/1)
18/06/26 10:55:49 INFO TaskSchedulerImpl: Removed TaskSet 6.0, whose tasks have all completed, from pool 
18/06/26 10:55:49 INFO DAGScheduler: ResultStage 6 (print at SparkWordCount.scala:55) finished in 0.006 s
18/06/26 10:55:49 INFO DAGScheduler: Job 3 finished: print at SparkWordCount.scala:55, took 0.018014 s
18/06/26 10:55:49 INFO SparkContext: Starting job: print at SparkWordCount.scala:55
18/06/26 10:55:49 INFO MapOutputTrackerMaster: Size of output statuses for shuffle 1 is 82 bytes
18/06/26 10:55:49 INFO DAGScheduler: Got job 4 (print at SparkWordCount.scala:55) with 1 output partitions
18/06/26 10:55:49 INFO DAGScheduler: Final stage: ResultStage 8 (print at SparkWordCount.scala:55)
18/06/26 10:55:49 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 7)
18/06/26 10:55:49 INFO DAGScheduler: Missing parents: List()
18/06/26 10:55:49 INFO DAGScheduler: Submitting ResultStage 8 (ShuffledRDD[8] at reduceByKey at SparkWordCount.scala:52), which has no missing parents
18/06/26 10:55:49 INFO MemoryStore: Block broadcast_4 stored as values in memory (estimated size 2.6 KB, free 51.8 KB)
18/06/26 10:55:49 INFO MemoryStore: Block broadcast_4_piece0 stored as bytes in memory (estimated size 1632.0 B, free 53.4 KB)
18/06/26 10:55:49 INFO BlockManagerInfo: Added broadcast_4_piece0 in memory on localhost:55667 (size: 1632.0 B, free: 2.4 GB)
18/06/26 10:55:49 INFO SparkContext: Created broadcast 4 from broadcast at DAGScheduler.scala:1015
18/06/26 10:55:49 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 8 (ShuffledRDD[8] at reduceByKey at SparkWordCount.scala:52)
18/06/26 10:55:49 INFO TaskSchedulerImpl: Adding task set 8.0 with 1 tasks
18/06/26 10:55:49 INFO TaskSetManager: Starting task 0.0 in stage 8.0 (TID 4, localhost, partition 1,PROCESS_LOCAL, 1894 bytes)
18/06/26 10:55:49 INFO Executor: Running task 0.0 in stage 8.0 (TID 4)
18/06/26 10:55:49 INFO ShuffleBlockFetcherIterator: Getting 0 non-empty blocks out of 0 blocks
18/06/26 10:55:49 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
18/06/26 10:55:49 INFO Executor: Finished task 0.0 in stage 8.0 (TID 4). 1161 bytes result sent to driver
18/06/26 10:55:49 INFO DAGScheduler: ResultStage 8 (print at SparkWordCount.scala:55) finished in 0.004 s
18/06/26 10:55:49 INFO TaskSetManager: Finished task 0.0 in stage 8.0 (TID 4) in 4 ms on localhost (1/1)
18/06/26 10:55:49 INFO DAGScheduler: Job 4 finished: print at SparkWordCount.scala:55, took 0.009080 s
18/06/26 10:55:49 INFO TaskSchedulerImpl: Removed TaskSet 8.0, whose tasks have all completed, from pool 
-------------------------------------------
Time: 1529981749000 ms
-------------------------------------------

18/06/26 10:55:49 INFO JobScheduler: Finished job streaming job 1529981749000 ms.0 from job set of time 1529981749000 ms
18/06/26 10:55:49 INFO JobScheduler: Total delay: 0.076 s for time 1529981749000 ms (execution: 0.053 s)
18/06/26 10:55:49 INFO ShuffledRDD: Removing RDD 4 from persistence list
18/06/26 10:55:49 INFO BlockManager: Removing RDD 4
18/06/26 10:55:49 INFO MapPartitionsRDD: Removing RDD 3 from persistence list
18/06/26 10:55:49 INFO BlockManager: Removing RDD 3
18/06/26 10:55:49 INFO MapPartitionsRDD: Removing RDD 2 from persistence list
18/06/26 10:55:49 INFO BlockManager: Removing RDD 2
18/06/26 10:55:49 INFO BlockRDD: Removing RDD 1 from persistence list
18/06/26 10:55:49 INFO BlockManager: Removing RDD 1
18/06/26 10:55:49 INFO SocketInputDStream: Removing blocks of RDD BlockRDD[1] at socketTextStream at SparkWordCount.scala:46 of time 1529981749000 ms
18/06/26 10:55:49 INFO ReceivedBlockTracker: Deleting batches ArrayBuffer()
18/06/26 10:55:49 INFO InputInfoTracker: remove old batch metadata: 
18/06/26 10:55:50 INFO JobScheduler: Added jobs for time 1529981750000 ms
18/06/26 10:55:50 INFO JobScheduler: Starting job streaming job 1529981750000 ms.0 from job set of time 1529981750000 ms
18/06/26 10:55:50 INFO SparkContext: Starting job: print at SparkWordCount.scala:55
18/06/26 10:55:50 INFO DAGScheduler: Registering RDD 11 (map at SparkWordCount.scala:51)
18/06/26 10:55:50 INFO DAGScheduler: Got job 5 (print at SparkWordCount.scala:55) with 1 output partitions
18/06/26 10:55:50 INFO DAGScheduler: Final stage: ResultStage 10 (print at SparkWordCount.scala:55)
18/06/26 10:55:50 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 9)
18/06/26 10:55:50 INFO DAGScheduler: Missing parents: List()
18/06/26 10:55:50 INFO DAGScheduler: Submitting ResultStage 10 (ShuffledRDD[12] at reduceByKey at SparkWordCount.scala:52), which has no missing parents
18/06/26 10:55:50 INFO MemoryStore: Block broadcast_5 stored as values in memory (estimated size 2.6 KB, free 56.0 KB)
18/06/26 10:55:50 INFO MemoryStore: Block broadcast_5_piece0 stored as bytes in memory (estimated size 1634.0 B, free 57.6 KB)
18/06/26 10:55:50 INFO BlockManagerInfo: Added broadcast_5_piece0 in memory on localhost:55667 (size: 1634.0 B, free: 2.4 GB)
18/06/26 10:55:50 INFO SparkContext: Created broadcast 5 from broadcast at DAGScheduler.scala:1015
18/06/26 10:55:50 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 10 (ShuffledRDD[12] at reduceByKey at SparkWordCount.scala:52)
18/06/26 10:55:50 INFO TaskSchedulerImpl: Adding task set 10.0 with 1 tasks
18/06/26 10:55:50 INFO TaskSetManager: Starting task 0.0 in stage 10.0 (TID 5, localhost, partition 0,PROCESS_LOCAL, 1894 bytes)
18/06/26 10:55:50 INFO Executor: Running task 0.0 in stage 10.0 (TID 5)
18/06/26 10:55:50 INFO ShuffleBlockFetcherIterator: Getting 0 non-empty blocks out of 0 blocks
18/06/26 10:55:50 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
18/06/26 10:55:50 INFO Executor: Finished task 0.0 in stage 10.0 (TID 5). 1161 bytes result sent to driver
18/06/26 10:55:50 INFO TaskSetManager: Finished task 0.0 in stage 10.0 (TID 5) in 6 ms on localhost (1/1)
18/06/26 10:55:50 INFO DAGScheduler: ResultStage 10 (print at SparkWordCount.scala:55) finished in 0.008 s
18/06/26 10:55:50 INFO TaskSchedulerImpl: Removed TaskSet 10.0, whose tasks have all completed, from pool 
18/06/26 10:55:50 INFO DAGScheduler: Job 5 finished: print at SparkWordCount.scala:55, took 0.017396 s
18/06/26 10:55:50 INFO SparkContext: Starting job: print at SparkWordCount.scala:55
18/06/26 10:55:50 INFO MapOutputTrackerMaster: Size of output statuses for shuffle 2 is 82 bytes
18/06/26 10:55:50 INFO DAGScheduler: Got job 6 (print at SparkWordCount.scala:55) with 1 output partitions
18/06/26 10:55:50 INFO DAGScheduler: Final stage: ResultStage 12 (print at SparkWordCount.scala:55)
18/06/26 10:55:50 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 11)
18/06/26 10:55:50 INFO DAGScheduler: Missing parents: List()
18/06/26 10:55:50 INFO DAGScheduler: Submitting ResultStage 12 (ShuffledRDD[12] at reduceByKey at SparkWordCount.scala:52), which has no missing parents
18/06/26 10:55:50 INFO MemoryStore: Block broadcast_6 stored as values in memory (estimated size 2.6 KB, free 60.2 KB)
18/06/26 10:55:50 INFO MemoryStore: Block broadcast_6_piece0 stored as bytes in memory (estimated size 1634.0 B, free 61.8 KB)
18/06/26 10:55:50 INFO BlockManagerInfo: Added broadcast_6_piece0 in memory on localhost:55667 (size: 1634.0 B, free: 2.4 GB)
18/06/26 10:55:50 INFO SparkContext: Created broadcast 6 from broadcast at DAGScheduler.scala:1015
18/06/26 10:55:50 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 12 (ShuffledRDD[12] at reduceByKey at SparkWordCount.scala:52)
18/06/26 10:55:50 INFO TaskSchedulerImpl: Adding task set 12.0 with 1 tasks
18/06/26 10:55:50 INFO TaskSetManager: Starting task 0.0 in stage 12.0 (TID 6, localhost, partition 1,PROCESS_LOCAL, 1894 bytes)
18/06/26 10:55:50 INFO Executor: Running task 0.0 in stage 12.0 (TID 6)
18/06/26 10:55:50 INFO ShuffleBlockFetcherIterator: Getting 0 non-empty blocks out of 0 blocks
18/06/26 10:55:50 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
18/06/26 10:55:50 INFO Executor: Finished task 0.0 in stage 12.0 (TID 6). 1161 bytes result sent to driver
18/06/26 10:55:50 INFO TaskSetManager: Finished task 0.0 in stage 12.0 (TID 6) in 4 ms on localhost (1/1)
-------------------------------------------
Time: 1529981750000 ms
-------------------------------------------
18/06/26 10:55:50 INFO TaskSchedulerImpl: Removed TaskSet 12.0, whose tasks have all completed, from pool 
18/06/26 10:55:50 INFO DAGScheduler: ResultStage 12 (print at SparkWordCount.scala:55) finished in 0.005 s
18/06/26 10:55:50 INFO DAGScheduler: Job 6 finished: print at SparkWordCount.scala:55, took 0.009884 s
18/06/26 10:55:50 INFO JobScheduler: Finished job streaming job 1529981750000 ms.0 from job set of time 1529981750000 ms
18/06/26 10:55:50 INFO JobScheduler: Total delay: 0.069 s for time 1529981750000 ms (execution: 0.041 s)
18/06/26 10:55:50 INFO ShuffledRDD: Removing RDD 8 from persistence list
18/06/26 10:55:50 INFO BlockManager: Removing RDD 8
18/06/26 10:55:50 INFO MapPartitionsRDD: Removing RDD 7 from persistence list
18/06/26 10:55:50 INFO BlockManager: Removing RDD 7
18/06/26 10:55:50 INFO MapPartitionsRDD: Removing RDD 6 from persistence list
18/06/26 10:55:50 INFO BlockManager: Removing RDD 6
18/06/26 10:55:50 INFO BlockRDD: Removing RDD 5 from persistence list
18/06/26 10:55:50 INFO BlockManager: Removing RDD 5
18/06/26 10:55:50 INFO SocketInputDStream: Removing blocks of RDD BlockRDD[5] at socketTextStream at SparkWordCount.scala:46 of time 1529981750000 ms
18/06/26 10:55:50 INFO ReceivedBlockTracker: Deleting batches ArrayBuffer(1529981748000 ms)
18/06/26 10:55:50 INFO InputInfoTracker: remove old batch metadata: 1529981748000 ms

18/06/26 10:55:50 INFO ReceiverSupervisorImpl: Starting receiver again
18/06/26 10:55:50 INFO ReceiverTracker: Registered receiver for stream 0 from 10.41.88.38:55640
18/06/26 10:55:50 INFO ReceiverSupervisorImpl: Starting receiver
18/06/26 10:55:50 INFO ReceiverSupervisorImpl: Called receiver onStart
18/06/26 10:55:50 INFO SocketReceiver: Connecting to 192.168.56.101:9999
18/06/26 10:55:50 INFO ReceiverSupervisorImpl: Receiver started again
18/06/26 10:55:51 INFO JobScheduler: Added jobs for time 1529981751000 ms
18/06/26 10:55:51 INFO JobScheduler: Starting job streaming job 1529981751000 ms.0 from job set of time 1529981751000 ms
18/06/26 10:55:51 INFO SparkContext: Starting job: print at SparkWordCount.scala:55
18/06/26 10:55:51 INFO DAGScheduler: Registering RDD 15 (map at SparkWordCount.scala:51)
18/06/26 10:55:51 INFO DAGScheduler: Got job 7 (print at SparkWordCount.scala:55) with 1 output partitions
18/06/26 10:55:51 INFO DAGScheduler: Final stage: ResultStage 14 (print at SparkWordCount.scala:55)
18/06/26 10:55:51 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 13)
18/06/26 10:55:51 INFO DAGScheduler: Missing parents: List()
18/06/26 10:55:51 INFO DAGScheduler: Submitting ResultStage 14 (ShuffledRDD[16] at reduceByKey at SparkWordCount.scala:52), which has no missing parents
18/06/26 10:55:51 INFO MemoryStore: Block broadcast_7 stored as values in memory (estimated size 2.6 KB, free 64.4 KB)
18/06/26 10:55:51 INFO MemoryStore: Block broadcast_7_piece0 stored as bytes in memory (estimated size 1638.0 B, free 66.0 KB)
18/06/26 10:55:51 INFO BlockManagerInfo: Added broadcast_7_piece0 in memory on localhost:55667 (size: 1638.0 B, free: 2.4 GB)
18/06/26 10:55:51 INFO SparkContext: Created broadcast 7 from broadcast at DAGScheduler.scala:1015
18/06/26 10:55:51 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 14 (ShuffledRDD[16] at reduceByKey at SparkWordCount.scala:52)
18/06/26 10:55:51 INFO TaskSchedulerImpl: Adding task set 14.0 with 1 tasks
18/06/26 10:55:51 INFO TaskSetManager: Starting task 0.0 in stage 14.0 (TID 7, localhost, partition 0,PROCESS_LOCAL, 1894 bytes)
18/06/26 10:55:51 INFO Executor: Running task 0.0 in stage 14.0 (TID 7)
18/06/26 10:55:51 INFO ShuffleBlockFetcherIterator: Getting 0 non-empty blocks out of 0 blocks
18/06/26 10:55:51 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 1 ms
18/06/26 10:55:51 INFO Executor: Finished task 0.0 in stage 14.0 (TID 7). 1161 bytes result sent to driver
18/06/26 10:55:51 INFO TaskSetManager: Finished task 0.0 in stage 14.0 (TID 7) in 3 ms on localhost (1/1)
18/06/26 10:55:51 INFO TaskSchedulerImpl: Removed TaskSet 14.0, whose tasks have all completed, from pool 
18/06/26 10:55:51 INFO DAGScheduler: ResultStage 14 (print at SparkWordCount.scala:55) finished in 0.003 s
18/06/26 10:55:51 INFO DAGScheduler: Job 7 finished: print at SparkWordCount.scala:55, took 0.009481 s
18/06/26 10:55:51 INFO SparkContext: Starting job: print at SparkWordCount.scala:55
18/06/26 10:55:51 INFO MapOutputTrackerMaster: Size of output statuses for shuffle 3 is 82 bytes
18/06/26 10:55:51 INFO DAGScheduler: Got job 8 (print at SparkWordCount.scala:55) with 1 output partitions
18/06/26 10:55:51 INFO DAGScheduler: Final stage: ResultStage 16 (print at SparkWordCount.scala:55)
18/06/26 10:55:51 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 15)
18/06/26 10:55:51 INFO DAGScheduler: Missing parents: List()
18/06/26 10:55:51 INFO DAGScheduler: Submitting ResultStage 16 (ShuffledRDD[16] at reduceByKey at SparkWordCount.scala:52), which has no missing parents
18/06/26 10:55:51 INFO MemoryStore: Block broadcast_8 stored as values in memory (estimated size 2.6 KB, free 68.6 KB)
18/06/26 10:55:51 INFO MemoryStore: Block broadcast_8_piece0 stored as bytes in memory (estimated size 1638.0 B, free 70.2 KB)
18/06/26 10:55:51 INFO BlockManagerInfo: Added broadcast_8_piece0 in memory on localhost:55667 (size: 1638.0 B, free: 2.4 GB)
18/06/26 10:55:51 INFO SparkContext: Created broadcast 8 from broadcast at DAGScheduler.scala:1015
18/06/26 10:55:51 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 16 (ShuffledRDD[16] at reduceByKey at SparkWordCount.scala:52)
18/06/26 10:55:51 INFO TaskSchedulerImpl: Adding task set 16.0 with 1 tasks
18/06/26 10:55:51 INFO TaskSetManager: Starting task 0.0 in stage 16.0 (TID 8, localhost, partition 1,PROCESS_LOCAL, 1894 bytes)
18/06/26 10:55:51 INFO Executor: Running task 0.0 in stage 16.0 (TID 8)
18/06/26 10:55:51 INFO ShuffleBlockFetcherIterator: Getting 0 non-empty blocks out of 0 blocks
18/06/26 10:55:51 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
18/06/26 10:55:51 INFO Executor: Finished task 0.0 in stage 16.0 (TID 8). 1161 bytes result sent to driver
18/06/26 10:55:51 INFO TaskSetManager: Finished task 0.0 in stage 16.0 (TID 8) in 3 ms on localhost (1/1)
18/06/26 10:55:51 INFO TaskSchedulerImpl: Removed TaskSet 16.0, whose tasks have all completed, from pool 
18/06/26 10:55:51 INFO DAGScheduler: ResultStage 16 (print at SparkWordCount.scala:55) finished in 0.004 s
18/06/26 10:55:51 INFO DAGScheduler: Job 8 finished: print at SparkWordCount.scala:55, took 0.011080 s
-------------------------------------------
Time: 1529981751000 ms
-------------------------------------------

18/06/26 10:55:51 INFO JobScheduler: Finished job streaming job 1529981751000 ms.0 from job set of time 1529981751000 ms
18/06/26 10:55:51 INFO JobScheduler: Total delay: 0.042 s for time 1529981751000 ms (execution: 0.026 s)
18/06/26 10:55:51 INFO ShuffledRDD: Removing RDD 12 from persistence list
18/06/26 10:55:51 INFO BlockManager: Removing RDD 12
18/06/26 10:55:51 INFO MapPartitionsRDD: Removing RDD 11 from persistence list
18/06/26 10:55:51 INFO BlockManager: Removing RDD 11
18/06/26 10:55:51 INFO MapPartitionsRDD: Removing RDD 10 from persistence list
18/06/26 10:55:51 INFO BlockManager: Removing RDD 10
18/06/26 10:55:51 INFO BlockRDD: Removing RDD 9 from persistence list
18/06/26 10:55:51 INFO BlockManager: Removing RDD 9
18/06/26 10:55:51 INFO SocketInputDStream: Removing blocks of RDD BlockRDD[9] at socketTextStream at SparkWordCount.scala:46 of time 1529981751000 ms
18/06/26 10:55:51 INFO ReceivedBlockTracker: Deleting batches ArrayBuffer(1529981749000 ms)
18/06/26 10:55:51 INFO InputInfoTracker: remove old batch metadata: 1529981749000 ms
18/06/26 10:55:51 WARN ReceiverSupervisorImpl: Restarting receiver with delay 2000 ms: Error connecting to 192.168.56.101:9999
java.net.ConnectException: Connection refused: connect
	at java.net.DualStackPlainSocketImpl.connect0(Native Method)
	at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:79)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
	at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:589)
	at java.net.Socket.connect(Socket.java:538)
	at java.net.Socket.<init>(Socket.java:434)
	at java.net.Socket.<init>(Socket.java:211)
	at org.apache.spark.streaming.dstream.SocketReceiver.receive(SocketInputDStream.scala:73)
	at org.apache.spark.streaming.dstream.SocketReceiver$$anon$2.run(SocketInputDStream.scala:59)
18/06/26 10:55:51 INFO ReceiverSupervisorImpl: Stopping receiver with message: Restarting receiver with delay 2000ms: Error connecting to 192.168.56.101:9999: java.net.ConnectException: Connection refused: connect
18/06/26 10:55:51 INFO ReceiverSupervisorImpl: Called receiver onStop
18/06/26 10:55:51 INFO ReceiverSupervisorImpl: Deregistering receiver 0
18/06/26 10:55:51 ERROR ReceiverTracker: Deregistered receiver for stream 0: Restarting receiver with delay 2000ms: Error connecting to 192.168.56.101:9999 - java.net.ConnectException: Connection refused: connect
	at java.net.DualStackPlainSocketImpl.connect0(Native Method)
	at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:79)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
	at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:589)
	at java.net.Socket.connect(Socket.java:538)
	at java.net.Socket.<init>(Socket.java:434)
	at java.net.Socket.<init>(Socket.java:211)
	at org.apache.spark.streaming.dstream.SocketReceiver.receive(SocketInputDStream.scala:73)
	at org.apache.spark.streaming.dstream.SocketReceiver$$anon$2.run(SocketInputDStream.scala:59)

18/06/26 10:55:51 INFO ReceiverSupervisorImpl: Stopped receiver 0
18/06/26 10:55:52 INFO JobScheduler: Added jobs for time 1529981752000 ms
18/06/26 10:55:52 INFO JobScheduler: Starting job streaming job 1529981752000 ms.0 from job set of time 1529981752000 ms
18/06/26 10:55:52 INFO SparkContext: Starting job: print at SparkWordCount.scala:55
18/06/26 10:55:52 INFO DAGScheduler: Registering RDD 19 (map at SparkWordCount.scala:51)
18/06/26 10:55:52 INFO DAGScheduler: Got job 9 (print at SparkWordCount.scala:55) with 1 output partitions
18/06/26 10:55:52 INFO DAGScheduler: Final stage: ResultStage 18 (print at SparkWordCount.scala:55)
18/06/26 10:55:52 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 17)
18/06/26 10:55:52 INFO DAGScheduler: Missing parents: List()
18/06/26 10:55:52 INFO DAGScheduler: Submitting ResultStage 18 (ShuffledRDD[20] at reduceByKey at SparkWordCount.scala:52), which has no missing parents
18/06/26 10:55:52 INFO MemoryStore: Block broadcast_9 stored as values in memory (estimated size 2.6 KB, free 72.8 KB)
18/06/26 10:55:52 INFO MemoryStore: Block broadcast_9_piece0 stored as bytes in memory (estimated size 1637.0 B, free 74.4 KB)
18/06/26 10:55:52 INFO BlockManagerInfo: Added broadcast_9_piece0 in memory on localhost:55667 (size: 1637.0 B, free: 2.4 GB)
18/06/26 10:55:52 INFO SparkContext: Created broadcast 9 from broadcast at DAGScheduler.scala:1015
18/06/26 10:55:52 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 18 (ShuffledRDD[20] at reduceByKey at SparkWordCount.scala:52)
18/06/26 10:55:52 INFO TaskSchedulerImpl: Adding task set 18.0 with 1 tasks
18/06/26 10:55:52 INFO TaskSetManager: Starting task 0.0 in stage 18.0 (TID 9, localhost, partition 0,PROCESS_LOCAL, 1894 bytes)
18/06/26 10:55:52 INFO Executor: Running task 0.0 in stage 18.0 (TID 9)
18/06/26 10:55:52 INFO ShuffleBlockFetcherIterator: Getting 0 non-empty blocks out of 0 blocks
18/06/26 10:55:52 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
18/06/26 10:55:52 INFO Executor: Finished task 0.0 in stage 18.0 (TID 9). 1161 bytes result sent to driver
18/06/26 10:55:52 INFO TaskSetManager: Finished task 0.0 in stage 18.0 (TID 9) in 10 ms on localhost (1/1)
18/06/26 10:55:52 INFO TaskSchedulerImpl: Removed TaskSet 18.0, whose tasks have all completed, from pool 
18/06/26 10:55:52 INFO DAGScheduler: ResultStage 18 (print at SparkWordCount.scala:55) finished in 0.011 s
18/06/26 10:55:52 INFO DAGScheduler: Job 9 finished: print at SparkWordCount.scala:55, took 0.028769 s
18/06/26 10:55:52 INFO SparkContext: Starting job: print at SparkWordCount.scala:55
18/06/26 10:55:52 INFO MapOutputTrackerMaster: Size of output statuses for shuffle 4 is 82 bytes
18/06/26 10:55:52 INFO DAGScheduler: Got job 10 (print at SparkWordCount.scala:55) with 1 output partitions
18/06/26 10:55:52 INFO DAGScheduler: Final stage: ResultStage 20 (print at SparkWordCount.scala:55)
18/06/26 10:55:52 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 19)
18/06/26 10:55:52 INFO DAGScheduler: Missing parents: List()
18/06/26 10:55:52 INFO DAGScheduler: Submitting ResultStage 20 (ShuffledRDD[20] at reduceByKey at SparkWordCount.scala:52), which has no missing parents
18/06/26 10:55:52 INFO MemoryStore: Block broadcast_10 stored as values in memory (estimated size 2.6 KB, free 77.0 KB)
18/06/26 10:55:52 INFO MemoryStore: Block broadcast_10_piece0 stored as bytes in memory (estimated size 1637.0 B, free 78.6 KB)
18/06/26 10:55:52 INFO BlockManagerInfo: Added broadcast_10_piece0 in memory on localhost:55667 (size: 1637.0 B, free: 2.4 GB)
18/06/26 10:55:52 INFO SparkContext: Created broadcast 10 from broadcast at DAGScheduler.scala:1015
18/06/26 10:55:52 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 20 (ShuffledRDD[20] at reduceByKey at SparkWordCount.scala:52)
18/06/26 10:55:52 INFO TaskSchedulerImpl: Adding task set 20.0 with 1 tasks
18/06/26 10:55:52 INFO TaskSetManager: Starting task 0.0 in stage 20.0 (TID 10, localhost, partition 1,PROCESS_LOCAL, 1894 bytes)
18/06/26 10:55:52 INFO Executor: Running task 0.0 in stage 20.0 (TID 10)
18/06/26 10:55:52 INFO ShuffleBlockFetcherIterator: Getting 0 non-empty blocks out of 0 blocks
18/06/26 10:55:52 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
18/06/26 10:55:52 INFO Executor: Finished task 0.0 in stage 20.0 (TID 10). 1161 bytes result sent to driver
18/06/26 10:55:52 INFO TaskSetManager: Finished task 0.0 in stage 20.0 (TID 10) in 6 ms on localhost (1/1)
18/06/26 10:55:52 INFO TaskSchedulerImpl: Removed TaskSet 20.0, whose tasks have all completed, from pool 
18/06/26 10:55:52 INFO DAGScheduler: ResultStage 20 (print at SparkWordCount.scala:55) finished in 0.006 s
18/06/26 10:55:52 INFO DAGScheduler: Job 10 finished: print at SparkWordCount.scala:55, took 0.015599 s
18/06/26 10:55:52 INFO JobScheduler: Finished job streaming job 1529981752000 ms.0 from job set of time 1529981752000 ms
18/06/26 10:55:52 INFO JobScheduler: Total delay: 0.091 s for time 1529981752000 ms (execution: 0.064 s)
18/06/26 10:55:52 INFO ShuffledRDD: Removing RDD 16 from persistence list
18/06/26 10:55:52 INFO MapPartitionsRDD: Removing RDD 15 from persistence list
18/06/26 10:55:52 INFO BlockManager: Removing RDD 16
18/06/26 10:55:52 INFO MapPartitionsRDD: Removing RDD 14 from persistence list
18/06/26 10:55:52 INFO BlockManager: Removing RDD 15
18/06/26 10:55:52 INFO BlockManager: Removing RDD 14
18/06/26 10:55:52 INFO BlockRDD: Removing RDD 13 from persistence list
-------------------------------------------
Time: 1529981752000 ms
-------------------------------------------

18/06/26 10:55:52 INFO BlockManager: Removing RDD 13
18/06/26 10:55:52 INFO SocketInputDStream: Removing blocks of RDD BlockRDD[13] at socketTextStream at SparkWordCount.scala:46 of time 1529981752000 ms
18/06/26 10:55:52 INFO ReceivedBlockTracker: Deleting batches ArrayBuffer(1529981750000 ms)
18/06/26 10:55:52 INFO InputInfoTracker: remove old batch metadata: 1529981750000 ms
18/06/26 10:55:53 INFO JobScheduler: Added jobs for time 1529981753000 ms
18/06/26 10:55:53 INFO JobScheduler: Starting job streaming job 1529981753000 ms.0 from job set of time 1529981753000 ms
18/06/26 10:55:53 INFO SparkContext: Starting job: print at SparkWordCount.scala:55
18/06/26 10:55:53 INFO DAGScheduler: Registering RDD 23 (map at SparkWordCount.scala:51)
18/06/26 10:55:53 INFO DAGScheduler: Got job 11 (print at SparkWordCount.scala:55) with 1 output partitions
18/06/26 10:55:53 INFO DAGScheduler: Final stage: ResultStage 22 (print at SparkWordCount.scala:55)
18/06/26 10:55:53 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 21)
18/06/26 10:55:53 INFO DAGScheduler: Missing parents: List()
18/06/26 10:55:53 INFO DAGScheduler: Submitting ResultStage 22 (ShuffledRDD[24] at reduceByKey at SparkWordCount.scala:52), which has no missing parents
18/06/26 10:55:53 INFO MemoryStore: Block broadcast_11 stored as values in memory (estimated size 2.6 KB, free 81.2 KB)
18/06/26 10:55:53 INFO MemoryStore: Block broadcast_11_piece0 stored as bytes in memory (estimated size 1636.0 B, free 82.8 KB)
18/06/26 10:55:53 INFO BlockManagerInfo: Added broadcast_11_piece0 in memory on localhost:55667 (size: 1636.0 B, free: 2.4 GB)
18/06/26 10:55:53 INFO SparkContext: Created broadcast 11 from broadcast at DAGScheduler.scala:1015
18/06/26 10:55:53 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 22 (ShuffledRDD[24] at reduceByKey at SparkWordCount.scala:52)
18/06/26 10:55:53 INFO TaskSchedulerImpl: Adding task set 22.0 with 1 tasks
18/06/26 10:55:53 INFO TaskSetManager: Starting task 0.0 in stage 22.0 (TID 11, localhost, partition 0,PROCESS_LOCAL, 1894 bytes)
18/06/26 10:55:53 INFO Executor: Running task 0.0 in stage 22.0 (TID 11)
18/06/26 10:55:53 INFO ShuffleBlockFetcherIterator: Getting 0 non-empty blocks out of 0 blocks
18/06/26 10:55:53 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 1 ms
18/06/26 10:55:53 INFO Executor: Finished task 0.0 in stage 22.0 (TID 11). 1161 bytes result sent to driver
18/06/26 10:55:53 INFO TaskSetManager: Finished task 0.0 in stage 22.0 (TID 11) in 7 ms on localhost (1/1)
18/06/26 10:55:53 INFO TaskSchedulerImpl: Removed TaskSet 22.0, whose tasks have all completed, from pool 
18/06/26 10:55:53 INFO DAGScheduler: ResultStage 22 (print at SparkWordCount.scala:55) finished in 0.008 s
18/06/26 10:55:53 INFO DAGScheduler: Job 11 finished: print at SparkWordCount.scala:55, took 0.024801 s
18/06/26 10:55:53 INFO SparkContext: Starting job: print at SparkWordCount.scala:55
18/06/26 10:55:53 INFO MapOutputTrackerMaster: Size of output statuses for shuffle 5 is 82 bytes
18/06/26 10:55:53 INFO DAGScheduler: Got job 12 (print at SparkWordCount.scala:55) with 1 output partitions
18/06/26 10:55:53 INFO DAGScheduler: Final stage: ResultStage 24 (print at SparkWordCount.scala:55)
18/06/26 10:55:53 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 23)
18/06/26 10:55:53 INFO DAGScheduler: Missing parents: List()
18/06/26 10:55:53 INFO DAGScheduler: Submitting ResultStage 24 (ShuffledRDD[24] at reduceByKey at SparkWordCount.scala:52), which has no missing parents
18/06/26 10:55:53 INFO MemoryStore: Block broadcast_12 stored as values in memory (estimated size 2.6 KB, free 85.4 KB)
18/06/26 10:55:53 INFO MemoryStore: Block broadcast_12_piece0 stored as bytes in memory (estimated size 1636.0 B, free 87.0 KB)
18/06/26 10:55:53 INFO BlockManagerInfo: Added broadcast_12_piece0 in memory on localhost:55667 (size: 1636.0 B, free: 2.4 GB)
18/06/26 10:55:53 INFO SparkContext: Created broadcast 12 from broadcast at DAGScheduler.scala:1015
18/06/26 10:55:53 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 24 (ShuffledRDD[24] at reduceByKey at SparkWordCount.scala:52)
18/06/26 10:55:53 INFO TaskSchedulerImpl: Adding task set 24.0 with 1 tasks
18/06/26 10:55:53 INFO TaskSetManager: Starting task 0.0 in stage 24.0 (TID 12, localhost, partition 1,PROCESS_LOCAL, 1894 bytes)
18/06/26 10:55:53 INFO Executor: Running task 0.0 in stage 24.0 (TID 12)
18/06/26 10:55:53 INFO ShuffleBlockFetcherIterator: Getting 0 non-empty blocks out of 0 blocks
18/06/26 10:55:53 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
18/06/26 10:55:53 INFO Executor: Finished task 0.0 in stage 24.0 (TID 12). 1161 bytes result sent to driver
18/06/26 10:55:53 INFO TaskSetManager: Finished task 0.0 in stage 24.0 (TID 12) in 4 ms on localhost (1/1)
18/06/26 10:55:53 INFO TaskSchedulerImpl: Removed TaskSet 24.0, whose tasks have all completed, from pool 
18/06/26 10:55:53 INFO DAGScheduler: ResultStage 24 (print at SparkWordCount.scala:55) finished in 0.004 s
18/06/26 10:55:53 INFO DAGScheduler: Job 12 finished: print at SparkWordCount.scala:55, took 0.013128 s
-------------------------------------------
Time: 1529981753000 ms
-------------------------------------------

18/06/26 10:55:53 INFO JobScheduler: Finished job streaming job 1529981753000 ms.0 from job set of time 1529981753000 ms
18/06/26 10:55:53 INFO JobScheduler: Total delay: 0.080 s for time 1529981753000 ms (execution: 0.054 s)
18/06/26 10:55:53 INFO ShuffledRDD: Removing RDD 20 from persistence list
18/06/26 10:55:53 INFO BlockManager: Removing RDD 20
18/06/26 10:55:53 INFO MapPartitionsRDD: Removing RDD 19 from persistence list
18/06/26 10:55:53 INFO MapPartitionsRDD: Removing RDD 18 from persistence list
18/06/26 10:55:53 INFO BlockManager: Removing RDD 19
18/06/26 10:55:53 INFO BlockRDD: Removing RDD 17 from persistence list
18/06/26 10:55:53 INFO BlockManager: Removing RDD 18
18/06/26 10:55:53 INFO SocketInputDStream: Removing blocks of RDD BlockRDD[17] at socketTextStream at SparkWordCount.scala:46 of time 1529981753000 ms
18/06/26 10:55:53 INFO ReceivedBlockTracker: Deleting batches ArrayBuffer(1529981751000 ms)
18/06/26 10:55:53 INFO InputInfoTracker: remove old batch metadata: 1529981751000 ms
18/06/26 10:55:53 INFO BlockManager: Removing RDD 17
18/06/26 10:55:53 INFO ReceiverSupervisorImpl: Starting receiver again
18/06/26 10:55:53 INFO ReceiverTracker: Registered receiver for stream 0 from 10.41.88.38:55640
18/06/26 10:55:53 INFO ReceiverSupervisorImpl: Starting receiver
18/06/26 10:55:53 INFO ReceiverSupervisorImpl: Called receiver onStart
18/06/26 10:55:53 INFO ReceiverSupervisorImpl: Receiver started again
18/06/26 10:55:53 INFO SocketReceiver: Connecting to 192.168.56.101:9999
18/06/26 10:55:53 INFO SocketReceiver: Connected to 192.168.56.101:9999
18/06/26 10:55:54 INFO JobScheduler: Added jobs for time 1529981754000 ms
18/06/26 10:55:54 INFO JobScheduler: Starting job streaming job 1529981754000 ms.0 from job set of time 1529981754000 ms
18/06/26 10:55:54 INFO SparkContext: Starting job: print at SparkWordCount.scala:55
18/06/26 10:55:54 INFO DAGScheduler: Registering RDD 27 (map at SparkWordCount.scala:51)
18/06/26 10:55:54 INFO DAGScheduler: Got job 13 (print at SparkWordCount.scala:55) with 1 output partitions
18/06/26 10:55:54 INFO DAGScheduler: Final stage: ResultStage 26 (print at SparkWordCount.scala:55)
18/06/26 10:55:54 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 25)
18/06/26 10:55:54 INFO DAGScheduler: Missing parents: List()
18/06/26 10:55:54 INFO DAGScheduler: Submitting ResultStage 26 (ShuffledRDD[28] at reduceByKey at SparkWordCount.scala:52), which has no missing parents
18/06/26 10:55:54 INFO MemoryStore: Block broadcast_13 stored as values in memory (estimated size 2.6 KB, free 89.6 KB)
18/06/26 10:55:54 INFO MemoryStore: Block broadcast_13_piece0 stored as bytes in memory (estimated size 1637.0 B, free 91.2 KB)
18/06/26 10:55:54 INFO BlockManagerInfo: Added broadcast_13_piece0 in memory on localhost:55667 (size: 1637.0 B, free: 2.4 GB)
18/06/26 10:55:54 INFO SparkContext: Created broadcast 13 from broadcast at DAGScheduler.scala:1015
18/06/26 10:55:54 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 26 (ShuffledRDD[28] at reduceByKey at SparkWordCount.scala:52)
18/06/26 10:55:54 INFO TaskSchedulerImpl: Adding task set 26.0 with 1 tasks
18/06/26 10:55:54 INFO TaskSetManager: Starting task 0.0 in stage 26.0 (TID 13, localhost, partition 0,PROCESS_LOCAL, 1894 bytes)
18/06/26 10:55:54 INFO Executor: Running task 0.0 in stage 26.0 (TID 13)
18/06/26 10:55:54 INFO ShuffleBlockFetcherIterator: Getting 0 non-empty blocks out of 0 blocks
18/06/26 10:55:54 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
18/06/26 10:55:54 INFO Executor: Finished task 0.0 in stage 26.0 (TID 13). 1161 bytes result sent to driver
18/06/26 10:55:54 INFO TaskSetManager: Finished task 0.0 in stage 26.0 (TID 13) in 2 ms on localhost (1/1)
18/06/26 10:55:54 INFO TaskSchedulerImpl: Removed TaskSet 26.0, whose tasks have all completed, from pool 
18/06/26 10:55:54 INFO DAGScheduler: ResultStage 26 (print at SparkWordCount.scala:55) finished in 0.003 s
18/06/26 10:55:54 INFO DAGScheduler: Job 13 finished: print at SparkWordCount.scala:55, took 0.010641 s
18/06/26 10:55:54 INFO SparkContext: Starting job: print at SparkWordCount.scala:55
18/06/26 10:55:54 INFO MapOutputTrackerMaster: Size of output statuses for shuffle 6 is 82 bytes
18/06/26 10:55:54 INFO DAGScheduler: Got job 14 (print at SparkWordCount.scala:55) with 1 output partitions
18/06/26 10:55:54 INFO DAGScheduler: Final stage: ResultStage 28 (print at SparkWordCount.scala:55)
18/06/26 10:55:54 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 27)
18/06/26 10:55:54 INFO DAGScheduler: Missing parents: List()
18/06/26 10:55:54 INFO DAGScheduler: Submitting ResultStage 28 (ShuffledRDD[28] at reduceByKey at SparkWordCount.scala:52), which has no missing parents
18/06/26 10:55:54 INFO MemoryStore: Block broadcast_14 stored as values in memory (estimated size 2.6 KB, free 93.8 KB)
18/06/26 10:55:54 INFO MemoryStore: Block broadcast_14_piece0 stored as bytes in memory (estimated size 1637.0 B, free 95.4 KB)
18/06/26 10:55:54 INFO BlockManagerInfo: Added broadcast_14_piece0 in memory on localhost:55667 (size: 1637.0 B, free: 2.4 GB)
18/06/26 10:55:54 INFO SparkContext: Created broadcast 14 from broadcast at DAGScheduler.scala:1015
18/06/26 10:55:54 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 28 (ShuffledRDD[28] at reduceByKey at SparkWordCount.scala:52)
18/06/26 10:55:54 INFO TaskSchedulerImpl: Adding task set 28.0 with 1 tasks
18/06/26 10:55:54 INFO TaskSetManager: Starting task 0.0 in stage 28.0 (TID 14, localhost, partition 1,PROCESS_LOCAL, 1894 bytes)
18/06/26 10:55:54 INFO Executor: Running task 0.0 in stage 28.0 (TID 14)
18/06/26 10:55:54 INFO ShuffleBlockFetcherIterator: Getting 0 non-empty blocks out of 0 blocks
18/06/26 10:55:54 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
18/06/26 10:55:54 INFO Executor: Finished task 0.0 in stage 28.0 (TID 14). 1161 bytes result sent to driver
18/06/26 10:55:54 INFO TaskSetManager: Finished task 0.0 in stage 28.0 (TID 14) in 5 ms on localhost (1/1)
18/06/26 10:55:54 INFO TaskSchedulerImpl: Removed TaskSet 28.0, whose tasks have all completed, from pool 
18/06/26 10:55:54 INFO DAGScheduler: ResultStage 28 (print at SparkWordCount.scala:55) finished in 0.005 s
18/06/26 10:55:54 INFO DAGScheduler: Job 14 finished: print at SparkWordCount.scala:55, took 0.010471 s
18/06/26 10:55:54 INFO JobScheduler: Finished job streaming job 1529981754000 ms.0 from job set of time 1529981754000 ms
18/06/26 10:55:54 INFO JobScheduler: Total delay: 0.042 s for time 1529981754000 ms (execution: 0.032 s)
-------------------------------------------
Time: 1529981754000 ms
-------------------------------------------

18/06/26 10:55:54 INFO ShuffledRDD: Removing RDD 24 from persistence list
18/06/26 10:55:54 INFO BlockManager: Removing RDD 24
18/06/26 10:55:54 INFO MapPartitionsRDD: Removing RDD 23 from persistence list
18/06/26 10:55:54 INFO BlockManager: Removing RDD 23
18/06/26 10:55:54 INFO MapPartitionsRDD: Removing RDD 22 from persistence list
18/06/26 10:55:54 INFO BlockRDD: Removing RDD 21 from persistence list
18/06/26 10:55:54 INFO BlockManager: Removing RDD 22
18/06/26 10:55:54 INFO SocketInputDStream: Removing blocks of RDD BlockRDD[21] at socketTextStream at SparkWordCount.scala:46 of time 1529981754000 ms
18/06/26 10:55:54 INFO ReceivedBlockTracker: Deleting batches ArrayBuffer(1529981752000 ms)
18/06/26 10:55:54 INFO InputInfoTracker: remove old batch metadata: 1529981752000 ms
18/06/26 10:55:54 INFO BlockManager: Removing RDD 21
18/06/26 10:55:55 INFO JobScheduler: Added jobs for time 1529981755000 ms
18/06/26 10:55:55 INFO JobScheduler: Starting job streaming job 1529981755000 ms.0 from job set of time 1529981755000 ms
18/06/26 10:55:55 INFO SparkContext: Starting job: print at SparkWordCount.scala:55
18/06/26 10:55:55 INFO DAGScheduler: Registering RDD 31 (map at SparkWordCount.scala:51)
18/06/26 10:55:55 INFO DAGScheduler: Got job 15 (print at SparkWordCount.scala:55) with 1 output partitions
18/06/26 10:55:55 INFO DAGScheduler: Final stage: ResultStage 30 (print at SparkWordCount.scala:55)
18/06/26 10:55:55 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 29)
18/06/26 10:55:55 INFO DAGScheduler: Missing parents: List()
18/06/26 10:55:55 INFO DAGScheduler: Submitting ResultStage 30 (ShuffledRDD[32] at reduceByKey at SparkWordCount.scala:52), which has no missing parents
18/06/26 10:55:55 INFO MemoryStore: Block broadcast_15 stored as values in memory (estimated size 2.6 KB, free 98.0 KB)
18/06/26 10:55:55 INFO MemoryStore: Block broadcast_15_piece0 stored as bytes in memory (estimated size 1637.0 B, free 99.6 KB)
18/06/26 10:55:55 INFO BlockManagerInfo: Added broadcast_15_piece0 in memory on localhost:55667 (size: 1637.0 B, free: 2.4 GB)
18/06/26 10:55:55 INFO SparkContext: Created broadcast 15 from broadcast at DAGScheduler.scala:1015
18/06/26 10:55:55 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 30 (ShuffledRDD[32] at reduceByKey at SparkWordCount.scala:52)
18/06/26 10:55:55 INFO TaskSchedulerImpl: Adding task set 30.0 with 1 tasks
18/06/26 10:55:55 INFO TaskSetManager: Starting task 0.0 in stage 30.0 (TID 15, localhost, partition 0,PROCESS_LOCAL, 1894 bytes)
18/06/26 10:55:55 INFO Executor: Running task 0.0 in stage 30.0 (TID 15)
18/06/26 10:55:55 INFO ShuffleBlockFetcherIterator: Getting 0 non-empty blocks out of 0 blocks
18/06/26 10:55:55 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
18/06/26 10:55:55 INFO Executor: Finished task 0.0 in stage 30.0 (TID 15). 1161 bytes result sent to driver
18/06/26 10:55:55 INFO TaskSetManager: Finished task 0.0 in stage 30.0 (TID 15) in 5 ms on localhost (1/1)
18/06/26 10:55:55 INFO TaskSchedulerImpl: Removed TaskSet 30.0, whose tasks have all completed, from pool 
18/06/26 10:55:55 INFO DAGScheduler: ResultStage 30 (print at SparkWordCount.scala:55) finished in 0.005 s
18/06/26 10:55:55 INFO DAGScheduler: Job 15 finished: print at SparkWordCount.scala:55, took 0.012644 s
18/06/26 10:55:55 INFO SparkContext: Starting job: print at SparkWordCount.scala:55
18/06/26 10:55:55 INFO MapOutputTrackerMaster: Size of output statuses for shuffle 7 is 82 bytes
18/06/26 10:55:55 INFO DAGScheduler: Got job 16 (print at SparkWordCount.scala:55) with 1 output partitions
18/06/26 10:55:55 INFO DAGScheduler: Final stage: ResultStage 32 (print at SparkWordCount.scala:55)
18/06/26 10:55:55 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 31)
18/06/26 10:55:55 INFO DAGScheduler: Missing parents: List()
18/06/26 10:55:55 INFO DAGScheduler: Submitting ResultStage 32 (ShuffledRDD[32] at reduceByKey at SparkWordCount.scala:52), which has no missing parents
18/06/26 10:55:55 INFO MemoryStore: Block broadcast_16 stored as values in memory (estimated size 2.6 KB, free 102.2 KB)
18/06/26 10:55:55 INFO MemoryStore: Block broadcast_16_piece0 stored as bytes in memory (estimated size 1637.0 B, free 103.8 KB)
18/06/26 10:55:55 INFO BlockManagerInfo: Added broadcast_16_piece0 in memory on localhost:55667 (size: 1637.0 B, free: 2.4 GB)
18/06/26 10:55:55 INFO SparkContext: Created broadcast 16 from broadcast at DAGScheduler.scala:1015
18/06/26 10:55:55 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 32 (ShuffledRDD[32] at reduceByKey at SparkWordCount.scala:52)
18/06/26 10:55:55 INFO TaskSchedulerImpl: Adding task set 32.0 with 1 tasks
18/06/26 10:55:55 INFO TaskSetManager: Starting task 0.0 in stage 32.0 (TID 16, localhost, partition 1,PROCESS_LOCAL, 1894 bytes)
18/06/26 10:55:55 INFO Executor: Running task 0.0 in stage 32.0 (TID 16)
18/06/26 10:55:55 INFO ShuffleBlockFetcherIterator: Getting 0 non-empty blocks out of 0 blocks
18/06/26 10:55:55 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
18/06/26 10:55:55 INFO Executor: Finished task 0.0 in stage 32.0 (TID 16). 1161 bytes result sent to driver
18/06/26 10:55:55 INFO TaskSetManager: Finished task 0.0 in stage 32.0 (TID 16) in 5 ms on localhost (1/1)
18/06/26 10:55:55 INFO TaskSchedulerImpl: Removed TaskSet 32.0, whose tasks have all completed, from pool 
18/06/26 10:55:55 INFO DAGScheduler: ResultStage 32 (print at SparkWordCount.scala:55) finished in 0.005 s
18/06/26 10:55:55 INFO DAGScheduler: Job 16 finished: print at SparkWordCount.scala:55, took 0.012046 s
-------------------------------------------
Time: 1529981755000 ms
-------------------------------------------

18/06/26 10:55:55 INFO JobScheduler: Finished job streaming job 1529981755000 ms.0 from job set of time 1529981755000 ms
18/06/26 10:55:55 INFO JobScheduler: Total delay: 0.068 s for time 1529981755000 ms (execution: 0.044 s)
18/06/26 10:55:55 INFO ShuffledRDD: Removing RDD 28 from persistence list
18/06/26 10:55:55 INFO BlockManager: Removing RDD 28
18/06/26 10:55:55 INFO MapPartitionsRDD: Removing RDD 27 from persistence list
18/06/26 10:55:55 INFO BlockManager: Removing RDD 27
18/06/26 10:55:55 INFO MapPartitionsRDD: Removing RDD 26 from persistence list
18/06/26 10:55:55 INFO BlockManager: Removing RDD 26
18/06/26 10:55:55 INFO BlockRDD: Removing RDD 25 from persistence list
18/06/26 10:55:55 INFO BlockManager: Removing RDD 25
18/06/26 10:55:55 INFO SocketInputDStream: Removing blocks of RDD BlockRDD[25] at socketTextStream at SparkWordCount.scala:46 of time 1529981755000 ms
18/06/26 10:55:55 INFO ReceivedBlockTracker: Deleting batches ArrayBuffer(1529981753000 ms)
18/06/26 10:55:55 INFO InputInfoTracker: remove old batch metadata: 1529981753000 ms
18/06/26 10:55:56 INFO MemoryStore: Block input-0-1529981755800 stored as bytes in memory (estimated size 18.0 B, free 103.9 KB)
18/06/26 10:55:56 INFO BlockManagerInfo: Added input-0-1529981755800 in memory on localhost:55667 (size: 18.0 B, free: 2.4 GB)
18/06/26 10:55:56 WARN BlockManager: Block input-0-1529981755800 replicated to only 0 peer(s) instead of 1 peers
18/06/26 10:55:56 INFO JobScheduler: Added jobs for time 1529981756000 ms
18/06/26 10:55:56 INFO JobScheduler: Starting job streaming job 1529981756000 ms.0 from job set of time 1529981756000 ms
18/06/26 10:55:56 INFO SparkContext: Starting job: print at SparkWordCount.scala:55
18/06/26 10:55:56 INFO DAGScheduler: Registering RDD 35 (map at SparkWordCount.scala:51)
18/06/26 10:55:56 INFO BlockGenerator: Pushed block input-0-1529981755800
18/06/26 10:55:56 INFO DAGScheduler: Got job 17 (print at SparkWordCount.scala:55) with 1 output partitions
18/06/26 10:55:56 INFO DAGScheduler: Final stage: ResultStage 34 (print at SparkWordCount.scala:55)
18/06/26 10:55:56 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 33)
18/06/26 10:55:56 INFO DAGScheduler: Missing parents: List()
18/06/26 10:55:56 INFO DAGScheduler: Submitting ResultStage 34 (ShuffledRDD[36] at reduceByKey at SparkWordCount.scala:52), which has no missing parents
18/06/26 10:55:56 INFO MemoryStore: Block broadcast_17 stored as values in memory (estimated size 2.6 KB, free 106.5 KB)
18/06/26 10:55:56 INFO MemoryStore: Block broadcast_17_piece0 stored as bytes in memory (estimated size 1637.0 B, free 108.1 KB)
18/06/26 10:55:56 INFO BlockManagerInfo: Added broadcast_17_piece0 in memory on localhost:55667 (size: 1637.0 B, free: 2.4 GB)
18/06/26 10:55:56 INFO SparkContext: Created broadcast 17 from broadcast at DAGScheduler.scala:1015
18/06/26 10:55:56 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 34 (ShuffledRDD[36] at reduceByKey at SparkWordCount.scala:52)
18/06/26 10:55:56 INFO TaskSchedulerImpl: Adding task set 34.0 with 1 tasks
18/06/26 10:55:56 INFO TaskSetManager: Starting task 0.0 in stage 34.0 (TID 17, localhost, partition 0,PROCESS_LOCAL, 1894 bytes)
18/06/26 10:55:56 INFO Executor: Running task 0.0 in stage 34.0 (TID 17)
18/06/26 10:55:56 INFO ShuffleBlockFetcherIterator: Getting 0 non-empty blocks out of 0 blocks
18/06/26 10:55:56 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
18/06/26 10:55:56 INFO Executor: Finished task 0.0 in stage 34.0 (TID 17). 1161 bytes result sent to driver
18/06/26 10:55:56 INFO TaskSetManager: Finished task 0.0 in stage 34.0 (TID 17) in 4 ms on localhost (1/1)
18/06/26 10:55:56 INFO TaskSchedulerImpl: Removed TaskSet 34.0, whose tasks have all completed, from pool 
18/06/26 10:55:56 INFO DAGScheduler: ResultStage 34 (print at SparkWordCount.scala:55) finished in 0.005 s
18/06/26 10:55:56 INFO DAGScheduler: Job 17 finished: print at SparkWordCount.scala:55, took 0.013957 s
18/06/26 10:55:56 INFO SparkContext: Starting job: print at SparkWordCount.scala:55
18/06/26 10:55:56 INFO MapOutputTrackerMaster: Size of output statuses for shuffle 8 is 82 bytes
18/06/26 10:55:56 INFO DAGScheduler: Got job 18 (print at SparkWordCount.scala:55) with 1 output partitions
18/06/26 10:55:56 INFO DAGScheduler: Final stage: ResultStage 36 (print at SparkWordCount.scala:55)
18/06/26 10:55:56 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 35)
18/06/26 10:55:56 INFO DAGScheduler: Missing parents: List()
18/06/26 10:55:56 INFO DAGScheduler: Submitting ResultStage 36 (ShuffledRDD[36] at reduceByKey at SparkWordCount.scala:52), which has no missing parents
18/06/26 10:55:56 INFO MemoryStore: Block broadcast_18 stored as values in memory (estimated size 2.6 KB, free 110.7 KB)
18/06/26 10:55:56 INFO MemoryStore: Block broadcast_18_piece0 stored as bytes in memory (estimated size 1637.0 B, free 112.3 KB)
18/06/26 10:55:56 INFO BlockManagerInfo: Added broadcast_18_piece0 in memory on localhost:55667 (size: 1637.0 B, free: 2.4 GB)
18/06/26 10:55:56 INFO SparkContext: Created broadcast 18 from broadcast at DAGScheduler.scala:1015
18/06/26 10:55:56 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 36 (ShuffledRDD[36] at reduceByKey at SparkWordCount.scala:52)
18/06/26 10:55:56 INFO TaskSchedulerImpl: Adding task set 36.0 with 1 tasks
18/06/26 10:55:56 INFO TaskSetManager: Starting task 0.0 in stage 36.0 (TID 18, localhost, partition 1,PROCESS_LOCAL, 1894 bytes)
18/06/26 10:55:56 INFO Executor: Running task 0.0 in stage 36.0 (TID 18)
18/06/26 10:55:56 INFO ShuffleBlockFetcherIterator: Getting 0 non-empty blocks out of 0 blocks
18/06/26 10:55:56 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
18/06/26 10:55:56 INFO Executor: Finished task 0.0 in stage 36.0 (TID 18). 1161 bytes result sent to driver
18/06/26 10:55:56 INFO TaskSetManager: Finished task 0.0 in stage 36.0 (TID 18) in 1 ms on localhost (1/1)
18/06/26 10:55:56 INFO TaskSchedulerImpl: Removed TaskSet 36.0, whose tasks have all completed, from pool 
18/06/26 10:55:56 INFO DAGScheduler: ResultStage 36 (print at SparkWordCount.scala:55) finished in 0.002 s
18/06/26 10:55:56 INFO DAGScheduler: Job 18 finished: print at SparkWordCount.scala:55, took 0.005934 s
-------------------------------------------
Time: 1529981756000 ms
-------------------------------------------

18/06/26 10:55:56 INFO JobScheduler: Finished job streaming job 1529981756000 ms.0 from job set of time 1529981756000 ms
18/06/26 10:55:56 INFO JobScheduler: Total delay: 0.062 s for time 1529981756000 ms (execution: 0.032 s)
18/06/26 10:55:56 INFO ShuffledRDD: Removing RDD 32 from persistence list
18/06/26 10:55:56 INFO BlockManager: Removing RDD 32
18/06/26 10:55:56 INFO MapPartitionsRDD: Removing RDD 31 from persistence list
18/06/26 10:55:56 INFO BlockManager: Removing RDD 31
18/06/26 10:55:56 INFO MapPartitionsRDD: Removing RDD 30 from persistence list
18/06/26 10:55:56 INFO BlockManager: Removing RDD 30
18/06/26 10:55:56 INFO BlockRDD: Removing RDD 29 from persistence list
18/06/26 10:55:56 INFO SocketInputDStream: Removing blocks of RDD BlockRDD[29] at socketTextStream at SparkWordCount.scala:46 of time 1529981756000 ms
18/06/26 10:55:56 INFO ReceivedBlockTracker: Deleting batches ArrayBuffer(1529981754000 ms)
18/06/26 10:55:56 INFO InputInfoTracker: remove old batch metadata: 1529981754000 ms
18/06/26 10:55:56 INFO BlockManager: Removing RDD 29
18/06/26 10:55:57 INFO JobScheduler: Added jobs for time 1529981757000 ms
18/06/26 10:55:57 INFO JobScheduler: Starting job streaming job 1529981757000 ms.0 from job set of time 1529981757000 ms
18/06/26 10:55:57 INFO SparkContext: Starting job: print at SparkWordCount.scala:55
18/06/26 10:55:57 INFO DAGScheduler: Registering RDD 39 (map at SparkWordCount.scala:51)
18/06/26 10:55:57 INFO DAGScheduler: Got job 19 (print at SparkWordCount.scala:55) with 1 output partitions
18/06/26 10:55:57 INFO DAGScheduler: Final stage: ResultStage 38 (print at SparkWordCount.scala:55)
18/06/26 10:55:57 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 37)
18/06/26 10:55:57 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 37)
18/06/26 10:55:57 INFO DAGScheduler: Submitting ShuffleMapStage 37 (MapPartitionsRDD[39] at map at SparkWordCount.scala:51), which has no missing parents
18/06/26 10:55:57 INFO MemoryStore: Block broadcast_19 stored as values in memory (estimated size 2.7 KB, free 114.9 KB)
18/06/26 10:55:57 INFO MemoryStore: Block broadcast_19_piece0 stored as bytes in memory (estimated size 1643.0 B, free 116.5 KB)
18/06/26 10:55:57 INFO BlockManagerInfo: Added broadcast_19_piece0 in memory on localhost:55667 (size: 1643.0 B, free: 2.4 GB)
18/06/26 10:55:57 INFO SparkContext: Created broadcast 19 from broadcast at DAGScheduler.scala:1015
18/06/26 10:55:57 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 37 (MapPartitionsRDD[39] at map at SparkWordCount.scala:51)
18/06/26 10:55:57 INFO TaskSchedulerImpl: Adding task set 37.0 with 1 tasks
18/06/26 10:55:57 INFO TaskSetManager: Starting task 0.0 in stage 37.0 (TID 19, localhost, partition 0,NODE_LOCAL, 2006 bytes)
18/06/26 10:55:57 INFO Executor: Running task 0.0 in stage 37.0 (TID 19)
18/06/26 10:55:57 INFO BlockManager: Found block input-0-1529981755800 locally
18/06/26 10:55:57 INFO Executor: Finished task 0.0 in stage 37.0 (TID 19). 1159 bytes result sent to driver
18/06/26 10:55:57 INFO TaskSetManager: Finished task 0.0 in stage 37.0 (TID 19) in 31 ms on localhost (1/1)
18/06/26 10:55:57 INFO TaskSchedulerImpl: Removed TaskSet 37.0, whose tasks have all completed, from pool 
18/06/26 10:55:57 INFO DAGScheduler: ShuffleMapStage 37 (map at SparkWordCount.scala:51) finished in 0.033 s
18/06/26 10:55:57 INFO DAGScheduler: looking for newly runnable stages
18/06/26 10:55:57 INFO DAGScheduler: running: Set(ResultStage 0)
18/06/26 10:55:57 INFO DAGScheduler: waiting: Set(ResultStage 38)
18/06/26 10:55:57 INFO DAGScheduler: failed: Set()
18/06/26 10:55:57 INFO DAGScheduler: Submitting ResultStage 38 (ShuffledRDD[40] at reduceByKey at SparkWordCount.scala:52), which has no missing parents
18/06/26 10:55:57 INFO MemoryStore: Block broadcast_20 stored as values in memory (estimated size 2.6 KB, free 119.1 KB)
18/06/26 10:55:57 INFO MemoryStore: Block broadcast_20_piece0 stored as bytes in memory (estimated size 1633.0 B, free 120.7 KB)
18/06/26 10:55:57 INFO BlockManagerInfo: Added broadcast_20_piece0 in memory on localhost:55667 (size: 1633.0 B, free: 2.4 GB)
18/06/26 10:55:57 INFO SparkContext: Created broadcast 20 from broadcast at DAGScheduler.scala:1015
18/06/26 10:55:57 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 38 (ShuffledRDD[40] at reduceByKey at SparkWordCount.scala:52)
18/06/26 10:55:57 INFO TaskSchedulerImpl: Adding task set 38.0 with 1 tasks
18/06/26 10:55:57 INFO TaskSetManager: Starting task 0.0 in stage 38.0 (TID 20, localhost, partition 0,NODE_LOCAL, 1894 bytes)
18/06/26 10:55:57 INFO Executor: Running task 0.0 in stage 38.0 (TID 20)
18/06/26 10:55:57 INFO ShuffleBlockFetcherIterator: Getting 1 non-empty blocks out of 1 blocks
18/06/26 10:55:57 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 1 ms
18/06/26 10:55:57 INFO Executor: Finished task 0.0 in stage 38.0 (TID 20). 1330 bytes result sent to driver
18/06/26 10:55:57 INFO TaskSetManager: Finished task 0.0 in stage 38.0 (TID 20) in 8 ms on localhost (1/1)
18/06/26 10:55:57 INFO TaskSchedulerImpl: Removed TaskSet 38.0, whose tasks have all completed, from pool 
18/06/26 10:55:57 INFO DAGScheduler: ResultStage 38 (print at SparkWordCount.scala:55) finished in 0.009 s
18/06/26 10:55:57 INFO DAGScheduler: Job 19 finished: print at SparkWordCount.scala:55, took 0.071868 s
18/06/26 10:55:57 INFO SparkContext: Starting job: print at SparkWordCount.scala:55
18/06/26 10:55:57 INFO MapOutputTrackerMaster: Size of output statuses for shuffle 9 is 145 bytes
18/06/26 10:55:57 INFO DAGScheduler: Got job 20 (print at SparkWordCount.scala:55) with 1 output partitions
18/06/26 10:55:57 INFO DAGScheduler: Final stage: ResultStage 40 (print at SparkWordCount.scala:55)
18/06/26 10:55:57 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 39)
18/06/26 10:55:57 INFO DAGScheduler: Missing parents: List()
18/06/26 10:55:57 INFO DAGScheduler: Submitting ResultStage 40 (ShuffledRDD[40] at reduceByKey at SparkWordCount.scala:52), which has no missing parents
18/06/26 10:55:57 INFO MemoryStore: Block broadcast_21 stored as values in memory (estimated size 2.6 KB, free 123.3 KB)
18/06/26 10:55:57 INFO MemoryStore: Block broadcast_21_piece0 stored as bytes in memory (estimated size 1631.0 B, free 124.9 KB)
18/06/26 10:55:57 INFO BlockManagerInfo: Added broadcast_21_piece0 in memory on localhost:55667 (size: 1631.0 B, free: 2.4 GB)
18/06/26 10:55:57 INFO SparkContext: Created broadcast 21 from broadcast at DAGScheduler.scala:1015
18/06/26 10:55:57 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 40 (ShuffledRDD[40] at reduceByKey at SparkWordCount.scala:52)
18/06/26 10:55:57 INFO TaskSchedulerImpl: Adding task set 40.0 with 1 tasks
18/06/26 10:55:57 INFO TaskSetManager: Starting task 0.0 in stage 40.0 (TID 21, localhost, partition 1,PROCESS_LOCAL, 1894 bytes)
18/06/26 10:55:57 INFO Executor: Running task 0.0 in stage 40.0 (TID 21)
18/06/26 10:55:57 INFO ShuffleBlockFetcherIterator: Getting 0 non-empty blocks out of 1 blocks
18/06/26 10:55:57 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
18/06/26 10:55:57 INFO Executor: Finished task 0.0 in stage 40.0 (TID 21). 1161 bytes result sent to driver
18/06/26 10:55:57 INFO TaskSetManager: Finished task 0.0 in stage 40.0 (TID 21) in 2 ms on localhost (1/1)
18/06/26 10:55:57 INFO TaskSchedulerImpl: Removed TaskSet 40.0, whose tasks have all completed, from pool 
18/06/26 10:55:57 INFO DAGScheduler: ResultStage 40 (print at SparkWordCount.scala:55) finished in 0.002 s
18/06/26 10:55:57 INFO DAGScheduler: Job 20 finished: print at SparkWordCount.scala:55, took 0.007949 s
-------------------------------------------
Time: 1529981757000 ms
-------------------------------------------
18/06/26 10:55:57 INFO JobScheduler: Finished job streaming job 1529981757000 ms.0 from job set of time 1529981757000 ms
18/06/26 10:55:57 INFO JobScheduler: Total delay: 0.127 s for time 1529981757000 ms (execution: 0.096 s)
18/06/26 10:55:57 INFO ShuffledRDD: Removing RDD 36 from persistence list
(hello,1)
(world,1)

18/06/26 10:55:57 INFO BlockManager: Removing RDD 36
18/06/26 10:55:57 INFO MapPartitionsRDD: Removing RDD 35 from persistence list
18/06/26 10:55:57 INFO BlockManager: Removing RDD 35
18/06/26 10:55:57 INFO MapPartitionsRDD: Removing RDD 34 from persistence list
18/06/26 10:55:57 INFO BlockManager: Removing RDD 34
18/06/26 10:55:57 INFO BlockRDD: Removing RDD 33 from persistence list
18/06/26 10:55:57 INFO BlockManager: Removing RDD 33
18/06/26 10:55:57 INFO SocketInputDStream: Removing blocks of RDD BlockRDD[33] at socketTextStream at SparkWordCount.scala:46 of time 1529981757000 ms
18/06/26 10:55:57 INFO ReceivedBlockTracker: Deleting batches ArrayBuffer(1529981755000 ms)
18/06/26 10:55:57 INFO InputInfoTracker: remove old batch metadata: 1529981755000 ms
18/06/26 10:55:57 INFO MemoryStore: Block input-0-1529981757000 stored as bytes in memory (estimated size 18.0 B, free 124.9 KB)
18/06/26 10:55:57 INFO BlockManagerInfo: Added input-0-1529981757000 in memory on localhost:55667 (size: 18.0 B, free: 2.4 GB)
18/06/26 10:55:57 WARN BlockManager: Block input-0-1529981757000 replicated to only 0 peer(s) instead of 1 peers
18/06/26 10:55:57 INFO BlockGenerator: Pushed block input-0-1529981757000
18/06/26 10:55:58 INFO JobScheduler: Added jobs for time 1529981758000 ms
18/06/26 10:55:58 INFO JobScheduler: Starting job streaming job 1529981758000 ms.0 from job set of time 1529981758000 ms
18/06/26 10:55:58 INFO SparkContext: Starting job: print at SparkWordCount.scala:55
18/06/26 10:55:58 INFO DAGScheduler: Registering RDD 43 (map at SparkWordCount.scala:51)
18/06/26 10:55:58 INFO DAGScheduler: Got job 21 (print at SparkWordCount.scala:55) with 1 output partitions
18/06/26 10:55:58 INFO DAGScheduler: Final stage: ResultStage 42 (print at SparkWordCount.scala:55)
18/06/26 10:55:58 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 41)
18/06/26 10:55:58 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 41)
18/06/26 10:55:58 INFO DAGScheduler: Submitting ShuffleMapStage 41 (MapPartitionsRDD[43] at map at SparkWordCount.scala:51), which has no missing parents
18/06/26 10:55:58 INFO MemoryStore: Block broadcast_22 stored as values in memory (estimated size 2.7 KB, free 127.6 KB)
18/06/26 10:55:58 INFO MemoryStore: Block broadcast_22_piece0 stored as bytes in memory (estimated size 1643.0 B, free 129.2 KB)
18/06/26 10:55:58 INFO BlockManagerInfo: Added broadcast_22_piece0 in memory on localhost:55667 (size: 1643.0 B, free: 2.4 GB)
18/06/26 10:55:58 INFO SparkContext: Created broadcast 22 from broadcast at DAGScheduler.scala:1015
18/06/26 10:55:58 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 41 (MapPartitionsRDD[43] at map at SparkWordCount.scala:51)
18/06/26 10:55:58 INFO TaskSchedulerImpl: Adding task set 41.0 with 1 tasks
18/06/26 10:55:58 INFO TaskSetManager: Starting task 0.0 in stage 41.0 (TID 22, localhost, partition 0,NODE_LOCAL, 2006 bytes)
18/06/26 10:55:58 INFO Executor: Running task 0.0 in stage 41.0 (TID 22)
18/06/26 10:55:58 INFO BlockManager: Found block input-0-1529981757000 locally
18/06/26 10:55:58 INFO Executor: Finished task 0.0 in stage 41.0 (TID 22). 1159 bytes result sent to driver
18/06/26 10:55:58 INFO TaskSetManager: Finished task 0.0 in stage 41.0 (TID 22) in 10 ms on localhost (1/1)
18/06/26 10:55:58 INFO TaskSchedulerImpl: Removed TaskSet 41.0, whose tasks have all completed, from pool 
18/06/26 10:55:58 INFO DAGScheduler: ShuffleMapStage 41 (map at SparkWordCount.scala:51) finished in 0.010 s
18/06/26 10:55:58 INFO DAGScheduler: looking for newly runnable stages
18/06/26 10:55:58 INFO DAGScheduler: running: Set(ResultStage 0)
18/06/26 10:55:58 INFO DAGScheduler: waiting: Set(ResultStage 42)
18/06/26 10:55:58 INFO DAGScheduler: failed: Set()
18/06/26 10:55:58 INFO DAGScheduler: Submitting ResultStage 42 (ShuffledRDD[44] at reduceByKey at SparkWordCount.scala:52), which has no missing parents
18/06/26 10:55:58 INFO MemoryStore: Block broadcast_23 stored as values in memory (estimated size 2.6 KB, free 131.8 KB)
18/06/26 10:55:58 INFO MemoryStore: Block broadcast_23_piece0 stored as bytes in memory (estimated size 1633.0 B, free 133.4 KB)
18/06/26 10:55:58 INFO BlockManagerInfo: Added broadcast_23_piece0 in memory on localhost:55667 (size: 1633.0 B, free: 2.4 GB)
18/06/26 10:55:58 INFO SparkContext: Created broadcast 23 from broadcast at DAGScheduler.scala:1015
18/06/26 10:55:58 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 42 (ShuffledRDD[44] at reduceByKey at SparkWordCount.scala:52)
18/06/26 10:55:58 INFO TaskSchedulerImpl: Adding task set 42.0 with 1 tasks
18/06/26 10:55:58 INFO TaskSetManager: Starting task 0.0 in stage 42.0 (TID 23, localhost, partition 0,NODE_LOCAL, 1894 bytes)
18/06/26 10:55:58 INFO Executor: Running task 0.0 in stage 42.0 (TID 23)
18/06/26 10:55:58 INFO ShuffleBlockFetcherIterator: Getting 1 non-empty blocks out of 1 blocks
18/06/26 10:55:58 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
18/06/26 10:55:58 INFO Executor: Finished task 0.0 in stage 42.0 (TID 23). 1330 bytes result sent to driver
18/06/26 10:55:58 INFO TaskSetManager: Finished task 0.0 in stage 42.0 (TID 23) in 4 ms on localhost (1/1)
18/06/26 10:55:58 INFO TaskSchedulerImpl: Removed TaskSet 42.0, whose tasks have all completed, from pool 
18/06/26 10:55:58 INFO DAGScheduler: ResultStage 42 (print at SparkWordCount.scala:55) finished in 0.004 s
18/06/26 10:55:58 INFO DAGScheduler: Job 21 finished: print at SparkWordCount.scala:55, took 0.028594 s
18/06/26 10:55:58 INFO SparkContext: Starting job: print at SparkWordCount.scala:55
18/06/26 10:55:58 INFO MapOutputTrackerMaster: Size of output statuses for shuffle 10 is 145 bytes
18/06/26 10:55:58 INFO DAGScheduler: Got job 22 (print at SparkWordCount.scala:55) with 1 output partitions
18/06/26 10:55:58 INFO DAGScheduler: Final stage: ResultStage 44 (print at SparkWordCount.scala:55)
18/06/26 10:55:58 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 43)
18/06/26 10:55:58 INFO DAGScheduler: Missing parents: List()
18/06/26 10:55:58 INFO DAGScheduler: Submitting ResultStage 44 (ShuffledRDD[44] at reduceByKey at SparkWordCount.scala:52), which has no missing parents
18/06/26 10:55:58 INFO MemoryStore: Block broadcast_24 stored as values in memory (estimated size 2.6 KB, free 136.0 KB)
18/06/26 10:55:58 INFO MemoryStore: Block broadcast_24_piece0 stored as bytes in memory (estimated size 1633.0 B, free 137.6 KB)
18/06/26 10:55:58 INFO BlockManagerInfo: Added broadcast_24_piece0 in memory on localhost:55667 (size: 1633.0 B, free: 2.4 GB)
18/06/26 10:55:58 INFO SparkContext: Created broadcast 24 from broadcast at DAGScheduler.scala:1015
18/06/26 10:55:58 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 44 (ShuffledRDD[44] at reduceByKey at SparkWordCount.scala:52)
18/06/26 10:55:58 INFO TaskSchedulerImpl: Adding task set 44.0 with 1 tasks
18/06/26 10:55:58 INFO TaskSetManager: Starting task 0.0 in stage 44.0 (TID 24, localhost, partition 1,PROCESS_LOCAL, 1894 bytes)
18/06/26 10:55:58 INFO Executor: Running task 0.0 in stage 44.0 (TID 24)
18/06/26 10:55:58 INFO ShuffleBlockFetcherIterator: Getting 0 non-empty blocks out of 1 blocks
18/06/26 10:55:58 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
18/06/26 10:55:58 INFO Executor: Finished task 0.0 in stage 44.0 (TID 24). 1161 bytes result sent to driver
18/06/26 10:55:58 INFO TaskSetManager: Finished task 0.0 in stage 44.0 (TID 24) in 2 ms on localhost (1/1)
18/06/26 10:55:58 INFO TaskSchedulerImpl: Removed TaskSet 44.0, whose tasks have all completed, from pool 
18/06/26 10:55:58 INFO DAGScheduler: ResultStage 44 (print at SparkWordCount.scala:55) finished in 0.002 s
18/06/26 10:55:58 INFO DAGScheduler: Job 22 finished: print at SparkWordCount.scala:55, took 0.010047 s
-------------------------------------------
Time: 1529981758000 ms
-------------------------------------------
(hello,1)
(world,1)

18/06/26 10:55:58 INFO JobScheduler: Finished job streaming job 1529981758000 ms.0 from job set of time 1529981758000 ms
18/06/26 10:55:58 INFO JobScheduler: Total delay: 0.082 s for time 1529981758000 ms (execution: 0.052 s)
18/06/26 10:55:58 INFO ShuffledRDD: Removing RDD 40 from persistence list
18/06/26 10:55:58 INFO BlockManager: Removing RDD 40
18/06/26 10:55:58 INFO MapPartitionsRDD: Removing RDD 39 from persistence list
18/06/26 10:55:58 INFO BlockManager: Removing RDD 39
18/06/26 10:55:58 INFO MapPartitionsRDD: Removing RDD 38 from persistence list
18/06/26 10:55:58 INFO BlockRDD: Removing RDD 37 from persistence list
18/06/26 10:55:58 INFO BlockManager: Removing RDD 38
18/06/26 10:55:58 INFO BlockManager: Removing RDD 37
18/06/26 10:55:58 INFO SocketInputDStream: Removing blocks of RDD BlockRDD[37] at socketTextStream at SparkWordCount.scala:46 of time 1529981758000 ms
18/06/26 10:55:58 INFO ReceivedBlockTracker: Deleting batches ArrayBuffer(1529981756000 ms)
18/06/26 10:55:58 INFO InputInfoTracker: remove old batch metadata: 1529981756000 ms
18/06/26 10:55:58 INFO BlockManagerInfo: Removed input-0-1529981755800 on localhost:55667 in memory (size: 18.0 B, free: 2.4 GB)
18/06/26 10:55:58 INFO MemoryStore: Block input-0-1529981758400 stored as bytes in memory (estimated size 18.0 B, free 137.6 KB)
18/06/26 10:55:58 INFO BlockManagerInfo: Added input-0-1529981758400 in memory on localhost:55667 (size: 18.0 B, free: 2.4 GB)
18/06/26 10:55:58 WARN BlockManager: Block input-0-1529981758400 replicated to only 0 peer(s) instead of 1 peers
18/06/26 10:55:58 INFO BlockGenerator: Pushed block input-0-1529981758400
18/06/26 10:55:59 INFO JobScheduler: Added jobs for time 1529981759000 ms
18/06/26 10:55:59 INFO JobScheduler: Starting job streaming job 1529981759000 ms.0 from job set of time 1529981759000 ms
18/06/26 10:55:59 INFO SparkContext: Starting job: print at SparkWordCount.scala:55
18/06/26 10:55:59 INFO DAGScheduler: Registering RDD 47 (map at SparkWordCount.scala:51)
18/06/26 10:55:59 INFO DAGScheduler: Got job 23 (print at SparkWordCount.scala:55) with 1 output partitions
18/06/26 10:55:59 INFO DAGScheduler: Final stage: ResultStage 46 (print at SparkWordCount.scala:55)
18/06/26 10:55:59 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 45)
18/06/26 10:55:59 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 45)
18/06/26 10:55:59 INFO DAGScheduler: Submitting ShuffleMapStage 45 (MapPartitionsRDD[47] at map at SparkWordCount.scala:51), which has no missing parents
18/06/26 10:55:59 INFO MemoryStore: Block broadcast_25 stored as values in memory (estimated size 2.7 KB, free 140.3 KB)
18/06/26 10:55:59 INFO MemoryStore: Block broadcast_25_piece0 stored as bytes in memory (estimated size 1643.0 B, free 141.9 KB)
18/06/26 10:55:59 INFO BlockManagerInfo: Added broadcast_25_piece0 in memory on localhost:55667 (size: 1643.0 B, free: 2.4 GB)
18/06/26 10:55:59 INFO SparkContext: Created broadcast 25 from broadcast at DAGScheduler.scala:1015
18/06/26 10:55:59 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 45 (MapPartitionsRDD[47] at map at SparkWordCount.scala:51)
18/06/26 10:55:59 INFO TaskSchedulerImpl: Adding task set 45.0 with 1 tasks
18/06/26 10:55:59 INFO TaskSetManager: Starting task 0.0 in stage 45.0 (TID 25, localhost, partition 0,NODE_LOCAL, 2006 bytes)
18/06/26 10:55:59 INFO Executor: Running task 0.0 in stage 45.0 (TID 25)
18/06/26 10:55:59 INFO BlockManager: Found block input-0-1529981758400 locally
18/06/26 10:55:59 INFO Executor: Finished task 0.0 in stage 45.0 (TID 25). 1159 bytes result sent to driver
18/06/26 10:55:59 INFO TaskSetManager: Finished task 0.0 in stage 45.0 (TID 25) in 6 ms on localhost (1/1)
18/06/26 10:55:59 INFO TaskSchedulerImpl: Removed TaskSet 45.0, whose tasks have all completed, from pool 
18/06/26 10:55:59 INFO DAGScheduler: ShuffleMapStage 45 (map at SparkWordCount.scala:51) finished in 0.006 s
18/06/26 10:55:59 INFO DAGScheduler: looking for newly runnable stages
18/06/26 10:55:59 INFO DAGScheduler: running: Set(ResultStage 0)
18/06/26 10:55:59 INFO DAGScheduler: waiting: Set(ResultStage 46)
18/06/26 10:55:59 INFO DAGScheduler: failed: Set()
18/06/26 10:55:59 INFO DAGScheduler: Submitting ResultStage 46 (ShuffledRDD[48] at reduceByKey at SparkWordCount.scala:52), which has no missing parents
18/06/26 10:55:59 INFO MemoryStore: Block broadcast_26 stored as values in memory (estimated size 2.6 KB, free 144.5 KB)
18/06/26 10:55:59 INFO MemoryStore: Block broadcast_26_piece0 stored as bytes in memory (estimated size 1631.0 B, free 146.1 KB)
18/06/26 10:55:59 INFO BlockManagerInfo: Added broadcast_26_piece0 in memory on localhost:55667 (size: 1631.0 B, free: 2.4 GB)
18/06/26 10:55:59 INFO SparkContext: Created broadcast 26 from broadcast at DAGScheduler.scala:1015
18/06/26 10:55:59 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 46 (ShuffledRDD[48] at reduceByKey at SparkWordCount.scala:52)
18/06/26 10:55:59 INFO TaskSchedulerImpl: Adding task set 46.0 with 1 tasks
18/06/26 10:55:59 INFO TaskSetManager: Starting task 0.0 in stage 46.0 (TID 26, localhost, partition 0,NODE_LOCAL, 1894 bytes)
18/06/26 10:55:59 INFO Executor: Running task 0.0 in stage 46.0 (TID 26)
18/06/26 10:55:59 INFO ShuffleBlockFetcherIterator: Getting 1 non-empty blocks out of 1 blocks
18/06/26 10:55:59 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
18/06/26 10:55:59 INFO Executor: Finished task 0.0 in stage 46.0 (TID 26). 1330 bytes result sent to driver
18/06/26 10:55:59 INFO TaskSetManager: Finished task 0.0 in stage 46.0 (TID 26) in 2 ms on localhost (1/1)
18/06/26 10:55:59 INFO TaskSchedulerImpl: Removed TaskSet 46.0, whose tasks have all completed, from pool 
18/06/26 10:55:59 INFO DAGScheduler: ResultStage 46 (print at SparkWordCount.scala:55) finished in 0.002 s
18/06/26 10:55:59 INFO DAGScheduler: Job 23 finished: print at SparkWordCount.scala:55, took 0.018360 s
18/06/26 10:55:59 INFO SparkContext: Starting job: print at SparkWordCount.scala:55
18/06/26 10:55:59 INFO MapOutputTrackerMaster: Size of output statuses for shuffle 11 is 145 bytes
18/06/26 10:55:59 INFO DAGScheduler: Got job 24 (print at SparkWordCount.scala:55) with 1 output partitions
18/06/26 10:55:59 INFO DAGScheduler: Final stage: ResultStage 48 (print at SparkWordCount.scala:55)
18/06/26 10:55:59 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 47)
18/06/26 10:55:59 INFO DAGScheduler: Missing parents: List()
18/06/26 10:55:59 INFO DAGScheduler: Submitting ResultStage 48 (ShuffledRDD[48] at reduceByKey at SparkWordCount.scala:52), which has no missing parents
18/06/26 10:55:59 INFO MemoryStore: Block broadcast_27 stored as values in memory (estimated size 2.6 KB, free 148.7 KB)
18/06/26 10:55:59 INFO MemoryStore: Block broadcast_27_piece0 stored as bytes in memory (estimated size 1633.0 B, free 150.3 KB)
18/06/26 10:55:59 INFO BlockManagerInfo: Added broadcast_27_piece0 in memory on localhost:55667 (size: 1633.0 B, free: 2.4 GB)
18/06/26 10:55:59 INFO SparkContext: Created broadcast 27 from broadcast at DAGScheduler.scala:1015
18/06/26 10:55:59 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 48 (ShuffledRDD[48] at reduceByKey at SparkWordCount.scala:52)
18/06/26 10:55:59 INFO BlockManagerInfo: Removed broadcast_7_piece0 on localhost:55667 in memory (size: 1638.0 B, free: 2.4 GB)
18/06/26 10:55:59 INFO TaskSchedulerImpl: Adding task set 48.0 with 1 tasks
18/06/26 10:55:59 INFO ContextCleaner: Cleaned accumulator 8
18/06/26 10:55:59 INFO TaskSetManager: Starting task 0.0 in stage 48.0 (TID 27, localhost, partition 1,PROCESS_LOCAL, 1894 bytes)
18/06/26 10:55:59 INFO Executor: Running task 0.0 in stage 48.0 (TID 27)
18/06/26 10:55:59 INFO ShuffleBlockFetcherIterator: Getting 0 non-empty blocks out of 1 blocks
18/06/26 10:55:59 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
18/06/26 10:55:59 INFO Executor: Finished task 0.0 in stage 48.0 (TID 27). 1161 bytes result sent to driver
18/06/26 10:55:59 INFO TaskSetManager: Finished task 0.0 in stage 48.0 (TID 27) in 2 ms on localhost (1/1)
18/06/26 10:55:59 INFO TaskSchedulerImpl: Removed TaskSet 48.0, whose tasks have all completed, from pool 
18/06/26 10:55:59 INFO ContextCleaner: Cleaned shuffle 3
18/06/26 10:55:59 INFO DAGScheduler: ResultStage 48 (print at SparkWordCount.scala:55) finished in 0.003 s
18/06/26 10:55:59 INFO DAGScheduler: Job 24 finished: print at SparkWordCount.scala:55, took 0.016824 s
18/06/26 10:55:59 INFO BlockManagerInfo: Removed broadcast_6_piece0 on localhost:55667 in memory (size: 1634.0 B, free: 2.4 GB)
18/06/26 10:55:59 INFO JobScheduler: Finished job streaming job 1529981759000 ms.0 from job set of time 1529981759000 ms
18/06/26 10:55:59 INFO JobScheduler: Total delay: 0.054 s for time 1529981759000 ms (execution: 0.044 s)
-------------------------------------------
Time: 1529981759000 ms
-------------------------------------------
(hello,1)
(world,1)

18/06/26 10:55:59 INFO ShuffledRDD: Removing RDD 44 from persistence list
18/06/26 10:55:59 INFO ContextCleaner: Cleaned accumulator 7
18/06/26 10:55:59 INFO BlockManager: Removing RDD 44
18/06/26 10:55:59 INFO MapPartitionsRDD: Removing RDD 43 from persistence list
18/06/26 10:55:59 INFO BlockManagerInfo: Removed broadcast_5_piece0 on localhost:55667 in memory (size: 1634.0 B, free: 2.4 GB)
18/06/26 10:55:59 INFO BlockManager: Removing RDD 43
18/06/26 10:55:59 INFO ContextCleaner: Cleaned accumulator 6
18/06/26 10:55:59 INFO MapPartitionsRDD: Removing RDD 42 from persistence list
18/06/26 10:55:59 INFO ContextCleaner: Cleaned shuffle 2
18/06/26 10:55:59 INFO BlockManager: Removing RDD 42
18/06/26 10:55:59 INFO BlockRDD: Removing RDD 41 from persistence list
18/06/26 10:55:59 INFO BlockManager: Removing RDD 41
18/06/26 10:55:59 INFO BlockManagerInfo: Removed broadcast_4_piece0 on localhost:55667 in memory (size: 1632.0 B, free: 2.4 GB)
18/06/26 10:55:59 INFO SocketInputDStream: Removing blocks of RDD BlockRDD[41] at socketTextStream at SparkWordCount.scala:46 of time 1529981759000 ms
18/06/26 10:55:59 INFO ContextCleaner: Cleaned accumulator 5
18/06/26 10:55:59 INFO ReceivedBlockTracker: Deleting batches ArrayBuffer(1529981757000 ms)
18/06/26 10:55:59 INFO InputInfoTracker: remove old batch metadata: 1529981757000 ms
18/06/26 10:55:59 INFO BlockManagerInfo: Removed input-0-1529981757000 on localhost:55667 in memory (size: 18.0 B, free: 2.4 GB)
18/06/26 10:55:59 INFO BlockManagerInfo: Removed broadcast_3_piece0 on localhost:55667 in memory (size: 1632.0 B, free: 2.4 GB)
18/06/26 10:55:59 INFO ContextCleaner: Cleaned accumulator 4
18/06/26 10:55:59 INFO ContextCleaner: Cleaned shuffle 1
18/06/26 10:55:59 INFO ContextCleaner: Cleaned shuffle 0
18/06/26 10:55:59 INFO BlockManagerInfo: Removed broadcast_26_piece0 on localhost:55667 in memory (size: 1631.0 B, free: 2.4 GB)
18/06/26 10:55:59 INFO ContextCleaner: Cleaned accumulator 27
18/06/26 10:55:59 INFO BlockManagerInfo: Removed broadcast_25_piece0 on localhost:55667 in memory (size: 1643.0 B, free: 2.4 GB)
18/06/26 10:55:59 INFO ContextCleaner: Cleaned accumulator 26
18/06/26 10:55:59 INFO BlockManagerInfo: Removed broadcast_24_piece0 on localhost:55667 in memory (size: 1633.0 B, free: 2.4 GB)
18/06/26 10:55:59 INFO ContextCleaner: Cleaned accumulator 25
18/06/26 10:55:59 INFO BlockManagerInfo: Removed broadcast_23_piece0 on localhost:55667 in memory (size: 1633.0 B, free: 2.4 GB)
18/06/26 10:55:59 INFO ContextCleaner: Cleaned accumulator 24
18/06/26 10:55:59 INFO BlockManagerInfo: Removed broadcast_22_piece0 on localhost:55667 in memory (size: 1643.0 B, free: 2.4 GB)
18/06/26 10:55:59 INFO ContextCleaner: Cleaned accumulator 23
18/06/26 10:55:59 INFO BlockManagerInfo: Removed broadcast_21_piece0 on localhost:55667 in memory (size: 1631.0 B, free: 2.4 GB)
18/06/26 10:55:59 INFO ContextCleaner: Cleaned accumulator 22
18/06/26 10:55:59 INFO BlockManagerInfo: Removed broadcast_20_piece0 on localhost:55667 in memory (size: 1633.0 B, free: 2.4 GB)
18/06/26 10:55:59 INFO ContextCleaner: Cleaned accumulator 21
18/06/26 10:55:59 INFO BlockManagerInfo: Removed broadcast_19_piece0 on localhost:55667 in memory (size: 1643.0 B, free: 2.4 GB)
18/06/26 10:55:59 INFO ContextCleaner: Cleaned accumulator 20
18/06/26 10:55:59 INFO ContextCleaner: Cleaned shuffle 9
18/06/26 10:55:59 INFO BlockManagerInfo: Removed broadcast_18_piece0 on localhost:55667 in memory (size: 1637.0 B, free: 2.4 GB)
18/06/26 10:55:59 INFO ContextCleaner: Cleaned accumulator 19
18/06/26 10:55:59 INFO BlockManagerInfo: Removed broadcast_17_piece0 on localhost:55667 in memory (size: 1637.0 B, free: 2.4 GB)
18/06/26 10:55:59 INFO ContextCleaner: Cleaned accumulator 18
18/06/26 10:55:59 INFO ContextCleaner: Cleaned shuffle 8
18/06/26 10:55:59 INFO BlockManagerInfo: Removed broadcast_16_piece0 on localhost:55667 in memory (size: 1637.0 B, free: 2.4 GB)
18/06/26 10:55:59 INFO ContextCleaner: Cleaned accumulator 17
18/06/26 10:55:59 INFO BlockManagerInfo: Removed broadcast_15_piece0 on localhost:55667 in memory (size: 1637.0 B, free: 2.4 GB)
18/06/26 10:55:59 INFO ContextCleaner: Cleaned accumulator 16
18/06/26 10:55:59 INFO ContextCleaner: Cleaned shuffle 7
18/06/26 10:55:59 INFO BlockManagerInfo: Removed broadcast_14_piece0 on localhost:55667 in memory (size: 1637.0 B, free: 2.4 GB)
18/06/26 10:55:59 INFO ContextCleaner: Cleaned accumulator 15
18/06/26 10:55:59 INFO BlockManagerInfo: Removed broadcast_13_piece0 on localhost:55667 in memory (size: 1637.0 B, free: 2.4 GB)
18/06/26 10:55:59 INFO ContextCleaner: Cleaned accumulator 14
18/06/26 10:55:59 INFO ContextCleaner: Cleaned shuffle 6
18/06/26 10:55:59 INFO BlockManagerInfo: Removed broadcast_12_piece0 on localhost:55667 in memory (size: 1636.0 B, free: 2.4 GB)
18/06/26 10:55:59 INFO ContextCleaner: Cleaned accumulator 13
18/06/26 10:55:59 INFO BlockManagerInfo: Removed broadcast_11_piece0 on localhost:55667 in memory (size: 1636.0 B, free: 2.4 GB)
18/06/26 10:55:59 INFO ContextCleaner: Cleaned accumulator 12
18/06/26 10:55:59 INFO ContextCleaner: Cleaned shuffle 5
18/06/26 10:55:59 INFO BlockManagerInfo: Removed broadcast_10_piece0 on localhost:55667 in memory (size: 1637.0 B, free: 2.4 GB)
18/06/26 10:55:59 INFO ContextCleaner: Cleaned accumulator 11
18/06/26 10:55:59 INFO BlockManagerInfo: Removed broadcast_9_piece0 on localhost:55667 in memory (size: 1637.0 B, free: 2.4 GB)
18/06/26 10:55:59 INFO ContextCleaner: Cleaned accumulator 10
18/06/26 10:55:59 INFO ContextCleaner: Cleaned shuffle 4
18/06/26 10:55:59 INFO BlockManagerInfo: Removed broadcast_8_piece0 on localhost:55667 in memory (size: 1638.0 B, free: 2.4 GB)
18/06/26 10:55:59 INFO ContextCleaner: Cleaned accumulator 9
18/06/26 10:56:00 INFO JobScheduler: Added jobs for time 1529981760000 ms
18/06/26 10:56:00 INFO JobScheduler: Starting job streaming job 1529981760000 ms.0 from job set of time 1529981760000 ms
18/06/26 10:56:00 INFO SparkContext: Starting job: print at SparkWordCount.scala:55
18/06/26 10:56:00 INFO DAGScheduler: Registering RDD 51 (map at SparkWordCount.scala:51)
18/06/26 10:56:00 INFO DAGScheduler: Got job 25 (print at SparkWordCount.scala:55) with 1 output partitions
18/06/26 10:56:00 INFO DAGScheduler: Final stage: ResultStage 50 (print at SparkWordCount.scala:55)
18/06/26 10:56:00 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 49)
18/06/26 10:56:00 INFO DAGScheduler: Missing parents: List()
18/06/26 10:56:00 INFO DAGScheduler: Submitting ResultStage 50 (ShuffledRDD[52] at reduceByKey at SparkWordCount.scala:52), which has no missing parents
18/06/26 10:56:00 INFO MemoryStore: Block broadcast_28 stored as values in memory (estimated size 2.6 KB, free 51.9 KB)
18/06/26 10:56:00 INFO MemoryStore: Block broadcast_28_piece0 stored as bytes in memory (estimated size 1637.0 B, free 53.5 KB)
18/06/26 10:56:00 INFO BlockManagerInfo: Added broadcast_28_piece0 in memory on localhost:55667 (size: 1637.0 B, free: 2.4 GB)
18/06/26 10:56:00 INFO SparkContext: Created broadcast 28 from broadcast at DAGScheduler.scala:1015
18/06/26 10:56:00 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 50 (ShuffledRDD[52] at reduceByKey at SparkWordCount.scala:52)
18/06/26 10:56:00 INFO TaskSchedulerImpl: Adding task set 50.0 with 1 tasks
18/06/26 10:56:00 INFO TaskSetManager: Starting task 0.0 in stage 50.0 (TID 28, localhost, partition 0,PROCESS_LOCAL, 1894 bytes)
18/06/26 10:56:00 INFO Executor: Running task 0.0 in stage 50.0 (TID 28)
18/06/26 10:56:00 INFO ShuffleBlockFetcherIterator: Getting 0 non-empty blocks out of 0 blocks
18/06/26 10:56:00 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
18/06/26 10:56:00 INFO Executor: Finished task 0.0 in stage 50.0 (TID 28). 1161 bytes result sent to driver
18/06/26 10:56:00 INFO TaskSetManager: Finished task 0.0 in stage 50.0 (TID 28) in 4 ms on localhost (1/1)
18/06/26 10:56:00 INFO TaskSchedulerImpl: Removed TaskSet 50.0, whose tasks have all completed, from pool 
18/06/26 10:56:00 INFO DAGScheduler: ResultStage 50 (print at SparkWordCount.scala:55) finished in 0.004 s
18/06/26 10:56:00 INFO DAGScheduler: Job 25 finished: print at SparkWordCount.scala:55, took 0.009608 s
18/06/26 10:56:00 INFO SparkContext: Starting job: print at SparkWordCount.scala:55
18/06/26 10:56:00 INFO MapOutputTrackerMaster: Size of output statuses for shuffle 12 is 82 bytes
18/06/26 10:56:00 INFO DAGScheduler: Got job 26 (print at SparkWordCount.scala:55) with 1 output partitions
18/06/26 10:56:00 INFO DAGScheduler: Final stage: ResultStage 52 (print at SparkWordCount.scala:55)
18/06/26 10:56:00 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 51)
18/06/26 10:56:00 INFO DAGScheduler: Missing parents: List()
18/06/26 10:56:00 INFO DAGScheduler: Submitting ResultStage 52 (ShuffledRDD[52] at reduceByKey at SparkWordCount.scala:52), which has no missing parents
18/06/26 10:56:00 INFO MemoryStore: Block broadcast_29 stored as values in memory (estimated size 2.6 KB, free 56.1 KB)
18/06/26 10:56:00 INFO MemoryStore: Block broadcast_29_piece0 stored as bytes in memory (estimated size 1637.0 B, free 57.7 KB)
18/06/26 10:56:00 INFO BlockManagerInfo: Added broadcast_29_piece0 in memory on localhost:55667 (size: 1637.0 B, free: 2.4 GB)
18/06/26 10:56:00 INFO SparkContext: Created broadcast 29 from broadcast at DAGScheduler.scala:1015
18/06/26 10:56:00 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 52 (ShuffledRDD[52] at reduceByKey at SparkWordCount.scala:52)
18/06/26 10:56:00 INFO TaskSchedulerImpl: Adding task set 52.0 with 1 tasks
18/06/26 10:56:00 INFO TaskSetManager: Starting task 0.0 in stage 52.0 (TID 29, localhost, partition 1,PROCESS_LOCAL, 1894 bytes)
18/06/26 10:56:00 INFO Executor: Running task 0.0 in stage 52.0 (TID 29)
18/06/26 10:56:00 INFO ShuffleBlockFetcherIterator: Getting 0 non-empty blocks out of 0 blocks
18/06/26 10:56:00 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
18/06/26 10:56:00 INFO Executor: Finished task 0.0 in stage 52.0 (TID 29). 1161 bytes result sent to driver
18/06/26 10:56:00 INFO TaskSetManager: Finished task 0.0 in stage 52.0 (TID 29) in 1 ms on localhost (1/1)
18/06/26 10:56:00 INFO TaskSchedulerImpl: Removed TaskSet 52.0, whose tasks have all completed, from pool 
18/06/26 10:56:00 INFO DAGScheduler: ResultStage 52 (print at SparkWordCount.scala:55) finished in 0.002 s
18/06/26 10:56:00 INFO DAGScheduler: Job 26 finished: print at SparkWordCount.scala:55, took 0.005037 s
-------------------------------------------
Time: 1529981760000 ms
-------------------------------------------

18/06/26 10:56:00 INFO JobScheduler: Finished job streaming job 1529981760000 ms.0 from job set of time 1529981760000 ms
18/06/26 10:56:00 INFO ShuffledRDD: Removing RDD 48 from persistence list
18/06/26 10:56:00 INFO JobScheduler: Total delay: 0.053 s for time 1529981760000 ms (execution: 0.030 s)
18/06/26 10:56:00 INFO BlockManager: Removing RDD 48
18/06/26 10:56:00 INFO MapPartitionsRDD: Removing RDD 47 from persistence list
18/06/26 10:56:00 INFO BlockManager: Removing RDD 47
18/06/26 10:56:00 INFO MapPartitionsRDD: Removing RDD 46 from persistence list
18/06/26 10:56:00 INFO BlockManager: Removing RDD 46
18/06/26 10:56:00 INFO BlockRDD: Removing RDD 45 from persistence list
18/06/26 10:56:00 INFO BlockManager: Removing RDD 45
18/06/26 10:56:00 INFO SocketInputDStream: Removing blocks of RDD BlockRDD[45] at socketTextStream at SparkWordCount.scala:46 of time 1529981760000 ms
18/06/26 10:56:00 INFO ReceivedBlockTracker: Deleting batches ArrayBuffer(1529981758000 ms)
18/06/26 10:56:00 INFO InputInfoTracker: remove old batch metadata: 1529981758000 ms
18/06/26 10:56:00 INFO BlockManagerInfo: Removed input-0-1529981758400 on localhost:55667 in memory (size: 18.0 B, free: 2.4 GB)
18/06/26 10:56:01 INFO JobScheduler: Added jobs for time 1529981761000 ms
18/06/26 10:56:01 INFO JobScheduler: Starting job streaming job 1529981761000 ms.0 from job set of time 1529981761000 ms
18/06/26 10:56:01 INFO SparkContext: Starting job: print at SparkWordCount.scala:55
18/06/26 10:56:01 INFO DAGScheduler: Registering RDD 55 (map at SparkWordCount.scala:51)
18/06/26 10:56:01 INFO DAGScheduler: Got job 27 (print at SparkWordCount.scala:55) with 1 output partitions
18/06/26 10:56:01 INFO DAGScheduler: Final stage: ResultStage 54 (print at SparkWordCount.scala:55)
18/06/26 10:56:01 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 53)
18/06/26 10:56:01 INFO DAGScheduler: Missing parents: List()
18/06/26 10:56:01 INFO DAGScheduler: Submitting ResultStage 54 (ShuffledRDD[56] at reduceByKey at SparkWordCount.scala:52), which has no missing parents
18/06/26 10:56:01 INFO MemoryStore: Block broadcast_30 stored as values in memory (estimated size 2.6 KB, free 60.3 KB)
18/06/26 10:56:01 INFO MemoryStore: Block broadcast_30_piece0 stored as bytes in memory (estimated size 1636.0 B, free 61.8 KB)
18/06/26 10:56:01 INFO BlockManagerInfo: Added broadcast_30_piece0 in memory on localhost:55667 (size: 1636.0 B, free: 2.4 GB)
18/06/26 10:56:01 INFO SparkContext: Created broadcast 30 from broadcast at DAGScheduler.scala:1015
18/06/26 10:56:01 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 54 (ShuffledRDD[56] at reduceByKey at SparkWordCount.scala:52)
18/06/26 10:56:01 INFO TaskSchedulerImpl: Adding task set 54.0 with 1 tasks
18/06/26 10:56:01 INFO TaskSetManager: Starting task 0.0 in stage 54.0 (TID 30, localhost, partition 0,PROCESS_LOCAL, 1894 bytes)
18/06/26 10:56:01 INFO Executor: Running task 0.0 in stage 54.0 (TID 30)
18/06/26 10:56:01 INFO ShuffleBlockFetcherIterator: Getting 0 non-empty blocks out of 0 blocks
18/06/26 10:56:01 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
18/06/26 10:56:01 INFO Executor: Finished task 0.0 in stage 54.0 (TID 30). 1161 bytes result sent to driver
18/06/26 10:56:01 INFO TaskSetManager: Finished task 0.0 in stage 54.0 (TID 30) in 2 ms on localhost (1/1)
18/06/26 10:56:01 INFO TaskSchedulerImpl: Removed TaskSet 54.0, whose tasks have all completed, from pool 
18/06/26 10:56:01 INFO DAGScheduler: ResultStage 54 (print at SparkWordCount.scala:55) finished in 0.002 s
18/06/26 10:56:01 INFO DAGScheduler: Job 27 finished: print at SparkWordCount.scala:55, took 0.006114 s
18/06/26 10:56:01 INFO SparkContext: Starting job: print at SparkWordCount.scala:55
18/06/26 10:56:01 INFO MapOutputTrackerMaster: Size of output statuses for shuffle 13 is 82 bytes
18/06/26 10:56:01 INFO DAGScheduler: Got job 28 (print at SparkWordCount.scala:55) with 1 output partitions
18/06/26 10:56:01 INFO DAGScheduler: Final stage: ResultStage 56 (print at SparkWordCount.scala:55)
18/06/26 10:56:01 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 55)
18/06/26 10:56:01 INFO DAGScheduler: Missing parents: List()
18/06/26 10:56:01 INFO DAGScheduler: Submitting ResultStage 56 (ShuffledRDD[56] at reduceByKey at SparkWordCount.scala:52), which has no missing parents
18/06/26 10:56:01 INFO MemoryStore: Block broadcast_31 stored as values in memory (estimated size 2.6 KB, free 64.4 KB)
18/06/26 10:56:01 INFO MemoryStore: Block broadcast_31_piece0 stored as bytes in memory (estimated size 1636.0 B, free 66.0 KB)
18/06/26 10:56:01 INFO BlockManagerInfo: Added broadcast_31_piece0 in memory on localhost:55667 (size: 1636.0 B, free: 2.4 GB)
18/06/26 10:56:01 INFO SparkContext: Created broadcast 31 from broadcast at DAGScheduler.scala:1015
18/06/26 10:56:01 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 56 (ShuffledRDD[56] at reduceByKey at SparkWordCount.scala:52)
18/06/26 10:56:01 INFO TaskSchedulerImpl: Adding task set 56.0 with 1 tasks
18/06/26 10:56:01 INFO TaskSetManager: Starting task 0.0 in stage 56.0 (TID 31, localhost, partition 1,PROCESS_LOCAL, 1894 bytes)
18/06/26 10:56:01 INFO Executor: Running task 0.0 in stage 56.0 (TID 31)
18/06/26 10:56:01 INFO ShuffleBlockFetcherIterator: Getting 0 non-empty blocks out of 0 blocks
18/06/26 10:56:01 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
18/06/26 10:56:01 INFO Executor: Finished task 0.0 in stage 56.0 (TID 31). 1161 bytes result sent to driver
18/06/26 10:56:01 INFO TaskSetManager: Finished task 0.0 in stage 56.0 (TID 31) in 2 ms on localhost (1/1)
18/06/26 10:56:01 INFO TaskSchedulerImpl: Removed TaskSet 56.0, whose tasks have all completed, from pool 
18/06/26 10:56:01 INFO DAGScheduler: ResultStage 56 (print at SparkWordCount.scala:55) finished in 0.002 s
18/06/26 10:56:01 INFO DAGScheduler: Job 28 finished: print at SparkWordCount.scala:55, took 0.006372 s
-------------------------------------------
Time: 1529981761000 ms
-------------------------------------------
18/06/26 10:56:01 INFO JobScheduler: Finished job streaming job 1529981761000 ms.0 from job set of time 1529981761000 ms
18/06/26 10:56:01 INFO JobScheduler: Total delay: 0.024 s for time 1529981761000 ms (execution: 0.018 s)
18/06/26 10:56:01 INFO ShuffledRDD: Removing RDD 52 from persistence list
18/06/26 10:56:01 INFO BlockManager: Removing RDD 52
18/06/26 10:56:01 INFO MapPartitionsRDD: Removing RDD 51 from persistence list
18/06/26 10:56:01 INFO BlockManager: Removing RDD 51
18/06/26 10:56:01 INFO MapPartitionsRDD: Removing RDD 50 from persistence list
18/06/26 10:56:01 INFO BlockRDD: Removing RDD 49 from persistence list
18/06/26 10:56:01 INFO SocketInputDStream: Removing blocks of RDD BlockRDD[49] at socketTextStream at SparkWordCount.scala:46 of time 1529981761000 ms
18/06/26 10:56:01 INFO ReceivedBlockTracker: Deleting batches ArrayBuffer(1529981759000 ms)
18/06/26 10:56:01 INFO InputInfoTracker: remove old batch metadata: 1529981759000 ms
18/06/26 10:56:01 INFO BlockManager: Removing RDD 49
18/06/26 10:56:01 INFO BlockManager: Removing RDD 50
&lt;/code&gt;&lt;/pre&gt;</description>
        </item>
        <item>
        <title>kafka &#43; spark streaming(1)</title>
        <link>https://blog.zrj.me/posts/2018-03-30-kafka-spark-streaming/</link>
        <pubDate>Fri, 30 Mar 2018 17:51:05 +0800</pubDate>
        
        <guid>https://blog.zrj.me/posts/2018-03-30-kafka-spark-streaming/</guid>
        <description>&lt;p&gt;I wrote a kafka + spark streaming test case a while ago, but did not write it down at the time. This part is quite important, so I need to find time to come back and fill it in&lt;/p&gt;
&lt;p&gt;I came across a tutorial here, &lt;a class=&#34;link&#34; href=&#34;http://colobu.com/2015/01/05/kafka-spark-streaming-integration-summary/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;http://colobu.com/2015/01/05/kafka-spark-streaming-integration-summary/&lt;/a&gt; , which is quite well written&lt;/p&gt;
&lt;p&gt;-&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&lt;/p&gt;
&lt;p&gt;2018-11-12 21:26:25 addendum&lt;/p&gt;
&lt;p&gt;Finally getting around to filling in this gap; almost a year has slipped by in the meantime, really...&lt;/p&gt;
&lt;p&gt;Install kafka by following this guide, &lt;a class=&#34;link&#34; href=&#34;https://segmentfault.com/a/1190000012730949&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://segmentfault.com/a/1190000012730949&lt;/a&gt; . We use version 0.11, mainly because kafka changed quite a lot after crossing into the 1.0 version line; partly out of habit, and partly to better match the scenario we actually run, it makes sense to get familiar with the pre-1.0 versions first&lt;/p&gt;
&lt;p&gt;Start with the standalone version. The plan is to get the standalone setup running first, with the focus on getting the whole spark streaming pipeline downstream working end to end, that is, tackle the main problem first, and only then come back to play with kafka in single-machine pseudo-distributed and multi-machine modes&lt;/p&gt;
&lt;p&gt;Creating topics, producing, consuming and inspecting from the command line all went through smoothly, which felt pretty good, haha&lt;/p&gt;
&lt;p&gt;Then I tried to consume it from spark streaming, and this is where adding the streaming dependency to pom.xml started to cause trouble: maven update would not finish. A search suggested that a proxy has to be configured inside the company network, but configuring it made no difference. I then suspected the settings had not been reloaded, and found this, &lt;a class=&#34;link&#34; href=&#34;https://blog.csdn.net/hello5orld/article/details/13772233&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://blog.csdn.net/hello5orld/article/details/13772233&lt;/a&gt; , which says&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;In Preferences&amp;ndash;&amp;gt;Maven&amp;ndash;&amp;gt;User Settings, click Update Settings to load the changes just made to settings.xml&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;But following that did not help either, so I went back and read the error log carefully and noticed this message&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Description	Resource	Path	Location	Type
The container &amp;#39;Maven Dependencies&amp;#39; references non existing library &amp;#39;C:\Users\adenzhang\.m2\repository\org\apache\spark\spark-core_2.11\1.6.3\spark-core_2.11-1.6.3.jar&amp;#39;	test20181111		Build path	Build Path Problem
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;So I suspected that some leftover files from an earlier attempt were breaking the download, deleted that whole directory, and this time it finally worked; pulling packages over the internal network is really fast&lt;/p&gt;
&lt;p&gt;Then, following this, &lt;a class=&#34;link&#34; href=&#34;https://blog.csdn.net/WinWill2012/article/details/71628179&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://blog.csdn.net/WinWill2012/article/details/71628179&lt;/a&gt; , add the dependency&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-xml&#34; data-lang=&#34;xml&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nt&#34;&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;nt&#34;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.spark&lt;span class=&#34;nt&#34;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;nt&#34;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;spark-streaming_2.10&lt;span class=&#34;nt&#34;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;nt&#34;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.5.2&lt;span class=&#34;nt&#34;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nt&#34;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Though the version I actually added is 1.6.3&lt;/p&gt;
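&lt;p&gt;Before moving on to kafka, the first check is that reading from a plain socket with spark streaming works at all. A minimal sketch of such a socket word count, just as an illustration (the object name, host and port below are made up, not the exact test code), would look roughly like this:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;import org.apache.spark.SparkConf
import org.apache.spark.streaming.Seconds
import org.apache.spark.streaming.StreamingContext

object SocketWordCountSketch {
  def main(args: Array[String]) {
    // local[2]: one thread for the socket receiver, one for processing
    val conf = new SparkConf().setAppName(&amp;#34;SocketWordCountSketch&amp;#34;).setMaster(&amp;#34;local[2]&amp;#34;)
    val ssc = new StreamingContext(conf, Seconds(1))
    // read lines from a socket, e.g. one fed by `nc -lk 9999`
    val lines = ssc.socketTextStream(&amp;#34;localhost&amp;#34;, 9999)
    // word count per batch: split, map to (word, 1), reduceByKey, then print
    val counts = lines.flatMap(_.split(&amp;#34; &amp;#34;)).map(word =&amp;gt; (word, 1)).reduceByKey(_ + _)
    counts.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
&lt;/code&gt;&lt;/pre&gt;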
&lt;p&gt;Once reading from a socket with streaming worked, I followed this &lt;a class=&#34;link&#34; href=&#34;https://www.cnblogs.com/xlturing/p/6246538.html&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://www.cnblogs.com/xlturing/p/6246538.html&lt;/a&gt; to add the kafka dependency&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-xml&#34; data-lang=&#34;xml&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nt&#34;&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;&lt;span class=&#34;c&#34;&gt;&amp;lt;!-- Spark Streaming Kafka --&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;nt&#34;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.spark&lt;span class=&#34;nt&#34;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;nt&#34;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;spark-streaming-kafka_2.10&lt;span class=&#34;nt&#34;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;nt&#34;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.6.3&lt;span class=&#34;nt&#34;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nt&#34;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Then start writing the kafka consumer code. I followed this, &lt;a class=&#34;link&#34; href=&#34;https://www.ibm.com/developerworks/cn/opensource/os-cn-spark-practice2/index.html&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://www.ibm.com/developerworks/cn/opensource/os-cn-spark-practice2/index.html&lt;/a&gt; , but the original layout there was garbled, so I reformatted it as follows&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-scala&#34; data-lang=&#34;scala&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;k&#34;&gt;package&lt;/span&gt; &lt;span class=&#34;nn&#34;&gt;me.zrj.test.test20181111&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;k&#34;&gt;import&lt;/span&gt; &lt;span class=&#34;nn&#34;&gt;org.apache.spark.SparkConf&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;k&#34;&gt;import&lt;/span&gt; &lt;span class=&#34;nn&#34;&gt;org.apache.spark.streaming.Seconds&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;k&#34;&gt;import&lt;/span&gt; &lt;span class=&#34;nn&#34;&gt;org.apache.spark.streaming.StreamingContext&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;k&#34;&gt;import&lt;/span&gt; &lt;span class=&#34;nn&#34;&gt;org.apache.spark.streaming.kafka.KafkaUtils&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;k&#34;&gt;import&lt;/span&gt; &lt;span class=&#34;nn&#34;&gt;org.apache.spark.HashPartitioner&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;k&#34;&gt;import&lt;/span&gt; &lt;span class=&#34;nn&#34;&gt;org.apache.spark.streaming.Duration&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;k&#34;&gt;object&lt;/span&gt; &lt;span class=&#34;nc&#34;&gt;WebPagePopularityValueCalculator&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;k&#34;&gt;private&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;checkpointDir&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;&amp;#34;popularity-data-checkpoint&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;k&#34;&gt;private&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;msgConsumerGroup&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;&amp;#34;user-behavior-topic-message-consumer-group&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;k&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;main&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;args&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kt&#34;&gt;Array&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;[&lt;/span&gt;&lt;span class=&#34;kt&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;])&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;args&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;length&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;&amp;lt;&lt;/span&gt; &lt;span class=&#34;mi&#34;&gt;2&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;n&#34;&gt;println&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;&amp;#34;Usage:WebPagePopularityValueCalculator zkserver1:2181,zkserver2:2181,zkserver3:2181 consumeMsgDataTimeInterval(secs)&amp;#34;&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;nc&#34;&gt;System&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;exit&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;1&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;o&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;nc&#34;&gt;Array&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;zkServers&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;processingInterval&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;args&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;conf&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;new&lt;/span&gt; &lt;span class=&#34;nc&#34;&gt;SparkConf&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;().&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;setAppName&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;&amp;#34;Web Page Popularity Value Calculator&amp;#34;&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;ssc&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;new&lt;/span&gt; &lt;span class=&#34;nc&#34;&gt;StreamingContext&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;conf&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nc&#34;&gt;Seconds&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;processingInterval&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;toInt&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;))&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;c1&#34;&gt;//using updateStateByKey asks for enabling checkpoint
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;n&#34;&gt;ssc&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;checkpoint&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;checkpointDir&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;kafkaStream&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;nc&#34;&gt;KafkaUtils&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;createStream&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;c1&#34;&gt;//Spark streaming context
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;n&#34;&gt;ssc&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;c1&#34;&gt;//zookeeper quorum. e.g zkserver1:2181,zkserver2:2181,...
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;n&#34;&gt;zkServers&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;c1&#34;&gt;//kafka message consumer group ID
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;n&#34;&gt;msgConsumerGroup&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;c1&#34;&gt;//Map of (topic_name -&amp;gt; numPartitions) to consume. Each partition is consumed in its own thread
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;nc&#34;&gt;Map&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;&amp;#34;user-behavior-topic&amp;#34;&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;-&amp;gt;&lt;/span&gt; &lt;span class=&#34;mi&#34;&gt;3&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;))&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;msgDataRDD&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;kafkaStream&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;map&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;_&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;_2&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;c1&#34;&gt;//for debug use only
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;c1&#34;&gt;//println(&amp;#34;Coming data in this interval...&amp;#34;)
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;c1&#34;&gt;//msgDataRDD.print()
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;c1&#34;&gt;// e.g page37|5|1.5119122|-1
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;popularityData&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;msgDataRDD&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;map&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;{&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;msgLine&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;o&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;        &lt;span class=&#34;k&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;dataArr&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kt&#34;&gt;Array&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;[&lt;/span&gt;&lt;span class=&#34;kt&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;]&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;msgLine&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;split&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;&amp;#34;\\|&amp;#34;&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;        &lt;span class=&#34;k&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;pageID&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;dataArr&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;0&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;        &lt;span class=&#34;c1&#34;&gt;//calculate the popularity value
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;        &lt;span class=&#34;k&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;popValue&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kt&#34;&gt;Double&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;dataArr&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;1&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;).&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;toFloat&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;*&lt;/span&gt; &lt;span class=&#34;mf&#34;&gt;0.8&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;+&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;dataArr&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;2&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;).&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;toFloat&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;*&lt;/span&gt; &lt;span class=&#34;mf&#34;&gt;0.8&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;+&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;dataArr&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;3&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;).&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;toFloat&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;*&lt;/span&gt; &lt;span class=&#34;mi&#34;&gt;1&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;        &lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;pageID&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;popValue&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;o&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;o&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;c1&#34;&gt;//sum the previous popularity value and current value
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;updatePopularityValue&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;iterator&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kt&#34;&gt;Iterator&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;[(&lt;/span&gt;&lt;span class=&#34;kt&#34;&gt;String&lt;/span&gt;, &lt;span class=&#34;kt&#34;&gt;Seq&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;[&lt;/span&gt;&lt;span class=&#34;kt&#34;&gt;Double&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;]&lt;/span&gt;, &lt;span class=&#34;kt&#34;&gt;Option&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;[&lt;/span&gt;&lt;span class=&#34;kt&#34;&gt;Double&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;])])&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;n&#34;&gt;iterator&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;flatMap&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;t&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;        &lt;span class=&#34;k&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;newValue&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kt&#34;&gt;Double&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;t&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;_2&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;sum&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;        &lt;span class=&#34;k&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;stateValue&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kt&#34;&gt;Double&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;t&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;_3&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;getOrElse&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;0&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;);&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;        &lt;span class=&#34;nc&#34;&gt;Some&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;newValue&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;+&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;stateValue&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;o&#34;&gt;}.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;map&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;sumedValue&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;t&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;_1&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;sumedValue&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)))&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;o&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;initialRDD&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;ssc&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;sparkContext&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;parallelize&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nc&#34;&gt;List&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;((&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;&amp;#34;page1&amp;#34;&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;mf&#34;&gt;0.00&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)))&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;stateDstream&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;popularityData&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;updateStateByKey&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;[&lt;/span&gt;&lt;span class=&#34;kt&#34;&gt;Double&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;](&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;updatePopularityValue&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;k&#34;&gt;new&lt;/span&gt; &lt;span class=&#34;nc&#34;&gt;HashPartitioner&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;ssc&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;sparkContext&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;defaultParallelism&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;),&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;initialRDD&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;c1&#34;&gt;//set the checkpoint interval to avoid checkpointing data too frequently, which may
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;c1&#34;&gt;//significantly reduce operation throughput
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;n&#34;&gt;stateDstream&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;checkpoint&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nc&#34;&gt;Duration&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;8&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;*&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;processingInterval&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;toInt&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;*&lt;/span&gt; &lt;span class=&#34;mi&#34;&gt;1000&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;))&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;c1&#34;&gt;//after calculation, we need to sort the result and only show the top 10 hot pages
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;n&#34;&gt;stateDstream&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;foreachRDD&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;{&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;rdd&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;o&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;        &lt;span class=&#34;k&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;sortedData&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;rdd&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;map&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;{&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;case&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;k&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;v&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;v&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;k&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;}.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;sortByKey&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;kc&#34;&gt;false&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;        &lt;span class=&#34;k&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;topKData&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;sortedData&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;take&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;10&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;).&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;map&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;{&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;case&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;v&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;k&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;k&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;v&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;        &lt;span class=&#34;n&#34;&gt;topKData&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;foreach&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;x&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;          &lt;span class=&#34;n&#34;&gt;println&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;x&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;        &lt;span class=&#34;o&#34;&gt;})&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;o&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;o&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;n&#34;&gt;ssc&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;start&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;n&#34;&gt;ssc&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;awaitTermination&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;o&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;o&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The logic in that code is fairly involved, though; my version is much simpler.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-scala&#34; data-lang=&#34;scala&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;k&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;kafkaStreaming&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;()&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;c1&#34;&gt;// Create the StreamingContext, one batch every 5 seconds
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;ssc&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;new&lt;/span&gt; &lt;span class=&#34;nc&#34;&gt;StreamingContext&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;new&lt;/span&gt; &lt;span class=&#34;nc&#34;&gt;SparkConf&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;().&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;setMaster&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;&amp;#34;local[2]&amp;#34;&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;).&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;setAppName&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;&amp;#34;kafka-streaming-1112&amp;#34;&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;),&lt;/span&gt; &lt;span class=&#34;nc&#34;&gt;Seconds&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;5&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;));&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;n&#34;&gt;ssc&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;checkpoint&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;&amp;#34;file:///D://spark-tmp&amp;#34;&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;kafkaStream&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;nc&#34;&gt;KafkaUtils&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;createStream&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;ssc&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;zkQuorum&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;&amp;#34;192.168.56.101:2181&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;        &lt;span class=&#34;o&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;groupId&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;&amp;#34;test-topic-1112-consumer-group&amp;#34;&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;topics&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;nc&#34;&gt;Map&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;&amp;#34;test&amp;#34;&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;-&amp;gt;&lt;/span&gt; &lt;span class=&#34;mi&#34;&gt;1&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;))&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;n&#34;&gt;kafkaStream&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;print&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;dstreamkafka&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;kafkaStream&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;map&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;_&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;_2&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;n&#34;&gt;dstreamkafka&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;print&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;n&#34;&gt;ssc&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;start&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;n&#34;&gt;ssc&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;awaitTermination&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;o&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Running it threw an exception:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;18/11/12 20:27:12 INFO KafkaReceiver: Starting MessageHandler.
18/11/12 20:27:12 INFO VerifiableProperties: Verifying properties
18/11/12 20:27:12 INFO VerifiableProperties: Property client.id is overridden to test-topic-1112-consumer-group
18/11/12 20:27:12 INFO VerifiableProperties: Property metadata.broker.list is overridden to localhost.localdomain:9092
18/11/12 20:27:12 INFO VerifiableProperties: Property request.timeout.ms is overridden to 30000
18/11/12 20:27:12 INFO ClientUtils$: Fetching metadata from broker id:1,host:localhost.localdomain,port:9092 with correlation id 0 for 1 topic(s) Set(test)
18/11/12 20:27:12 INFO SyncProducer: Connected to localhost.localdomain:9092 for producing
18/11/12 20:27:12 INFO SyncProducer: Disconnecting from localhost.localdomain:9092
18/11/12 20:27:12 WARN ClientUtils$: Fetching topic metadata with correlation id 0 for topics [Set(test)] from broker [id:1,host:localhost.localdomain,port:9092] failed
java.nio.channels.ClosedChannelException
	at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
	at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
	at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
	at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
	at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
	at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
	at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
	at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
18/11/12 20:27:12 INFO SyncProducer: Disconnecting from localhost.localdomain:9092
18/11/12 20:27:12 WARN ConsumerFetcherManager$LeaderFinderThread: [test-topic-1112-consumer-group_adenzhang-PC2-1542025622919-8e16cf3b-leader-finder-thread], Failed to find leader for Set([test,0])
kafka.common.KafkaException: fetching topic metadata for topics [Set(test)] from broker [ArrayBuffer(id:1,host:localhost.localdomain,port:9092)] failed
	at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:72)
	at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
	at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
	at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
Caused by: java.nio.channels.ClosedChannelException
	at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
	at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
	at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
	at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
	at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
	... 3 more
18/11/12 20:27:12 INFO ConsumerFetcherManager: [ConsumerFetcherManager-1542025632303] Added fetcher for partitions ArrayBuffer()
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The problem is most likely this id:1,host:localhost.localdomain,port:9092 setting: my Kafka runs in a VirtualBox VM while Eclipse runs on the Windows host outside. I first telnetted port 9092 to confirm it was open, then grepped the config files for the string localhost.localdomain and, surprisingly, found nothing, so I searched for the two halves separately.&lt;/p&gt;
&lt;p&gt;The matches are scattered across these config files:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[adenzhang@localhost config]$ grep &amp;#34;localhost&amp;#34; *
connect-distributed.properties:bootstrap.servers=localhost:9092
connect-standalone.properties:bootstrap.servers=localhost:9092
producer.properties:bootstrap.servers=localhost:9092
server.properties:zookeeper.connect=localhost:2181

[adenzhang@localhost config]$ grep &amp;#34;localdomain&amp;#34; *

[adenzhang@localhost config]$ grep 9092 *
connect-distributed.properties:bootstrap.servers=localhost:9092
connect-standalone.properties:bootstrap.servers=localhost:9092
producer.properties:bootstrap.servers=localhost:9092
server.properties:#     listeners = PLAINTEXT://your.host.name:9092
server.properties:#listeners=PLAINTEXT://:9092
server.properties:#advertised.listeners=PLAINTEXT://your.host.name:9092
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Oddly, localdomain does not show up anywhere. Could it be added automatically in code?&lt;/p&gt;
&lt;p&gt;Never mind; I changed these settings to the IP address anyway, restarted Kafka, and started the Spark Streaming job again. Still no luck, so back to searching.&lt;/p&gt;
&lt;p&gt;Then I saw this question, &lt;a class=&#34;link&#34; href=&#34;https://stackoverflow.com/questions/30606447/kafka-consumer-fetching-metadata-for-topics-failed&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://stackoverflow.com/questions/30606447/kafka-consumer-fetching-metadata-for-topics-failed&lt;/a&gt; , which says:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The broker tells the client which hostname should be used to produce/consume messages. By default Kafka uses the hostname of the system it runs on. If this hostname can not be resolved by the client side you get this exception.&lt;/p&gt;
&lt;p&gt;You can try setting advertised.host.name in the Kafka configuration to an hostname/address which the clients should use.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The problem is that the advertised.host.name property does not seem to exist in my config, not even commented out. A setting this important should at least leave a trace, so I suspected a version difference. The second highest-voted answer says:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Here is my way to solve this problem:&lt;/p&gt;
&lt;p&gt;run bin/kafka-server-stop.sh to stop running kafka server. modify the properties file config/server.properties by adding a line: listeners=PLAINTEXT://{ip.of.your.kafka.server}:9092 restart kafka server. Since without the lisener setting, kafka will use java.net.InetAddress.getCanonicalHostName() to get the address which the socket server listens on.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;That one does exist, near the top of server.properties; the default looks like this:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for &amp;#34;listeners&amp;#34; if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This looks promising, and the advertised.listeners below it does not need changing, since the comment says it falls back to the value of listeners; changing listeners alone is enough.&lt;/p&gt;
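&lt;p&gt;For reference, the change boils down to uncommenting and filling in a single line in config/server.properties; the IP below is the VirtualBox guest address used elsewhere in this post, so substitute your own:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# config/server.properties
listeners=PLAINTEXT://192.168.56.101:9092
&lt;/code&gt;&lt;/pre&gt;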
&lt;p&gt;That fixed it, but trying to produce data from the command line then failed with:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[2018-11-12 21:14:20,006] WARN Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The command I was using was:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Changing it to&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;bin/kafka-console-producer.sh --broker-list 192.168.56.101:9092 --topic test
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;made it work.&lt;/p&gt;
&lt;p&gt;The log output looks like this:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;18/11/12 21:16:25 INFO JobScheduler: Finished job streaming job 1542028585000 ms.0 from job set of time 1542028585000 ms
18/11/12 21:16:25 INFO JobScheduler: Starting job streaming job 1542028585000 ms.1 from job set of time 1542028585000 ms
18/11/12 21:16:25 INFO SparkContext: Starting job: print at SSTest20181111.scala:85
18/11/12 21:16:25 INFO DAGScheduler: Got job 7 (print at SSTest20181111.scala:85) with 1 output partitions
18/11/12 21:16:25 INFO DAGScheduler: Final stage: ResultStage 7 (print at SSTest20181111.scala:85)
18/11/12 21:16:25 INFO DAGScheduler: Parents of final stage: List()
18/11/12 21:16:25 INFO DAGScheduler: Missing parents: List()
18/11/12 21:16:25 INFO DAGScheduler: Submitting ResultStage 7 (MapPartitionsRDD[108] at map at SSTest20181111.scala:84), which has no missing parents
18/11/12 21:16:25 INFO MemoryStore: Block broadcast_7 stored as values in memory (estimated size 1648.0 B, free 2.4 GB)
18/11/12 21:16:25 INFO MemoryStore: Block broadcast_7_piece0 stored as bytes in memory (estimated size 1097.0 B, free 2.4 GB)
18/11/12 21:16:25 INFO BlockManagerInfo: Added broadcast_7_piece0 in memory on localhost:49533 (size: 1097.0 B, free: 2.4 GB)
18/11/12 21:16:25 INFO SparkContext: Created broadcast 7 from broadcast at DAGScheduler.scala:1006
18/11/12 21:16:25 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 7 (MapPartitionsRDD[108] at map at SSTest20181111.scala:84)
18/11/12 21:16:25 INFO TaskSchedulerImpl: Adding task set 7.0 with 1 tasks
18/11/12 21:16:25 INFO TaskSetManager: Starting task 0.0 in stage 7.0 (TID 9, localhost, partition 0,NODE_LOCAL, 1936 bytes)
18/11/12 21:16:25 INFO Executor: Running task 0.0 in stage 7.0 (TID 9)
18/11/12 21:16:25 INFO BlockManager: Found block input-0-1542028583000 locally
18/11/12 21:16:25 INFO Executor: Finished task 0.0 in stage 7.0 (TID 9). 937 bytes result sent to driver
18/11/12 21:16:25 INFO TaskSetManager: Finished task 0.0 in stage 7.0 (TID 9) in 4 ms on localhost (1/1)
18/11/12 21:16:25 INFO TaskSchedulerImpl: Removed TaskSet 7.0, whose tasks have all completed, from pool 
18/11/12 21:16:25 INFO DAGScheduler: ResultStage 7 (print at SSTest20181111.scala:85) finished in 0.004 s
18/11/12 21:16:25 INFO DAGScheduler: Job 7 finished: print at SSTest20181111.scala:85, took 0.011404 s
18/11/12 21:16:25 INFO SparkContext: Starting job: print at SSTest20181111.scala:85
18/11/12 21:16:25 INFO DAGScheduler: Got job 8 (print at SSTest20181111.scala:85) with 1 output partitions
18/11/12 21:16:25 INFO DAGScheduler: Final stage: ResultStage 8 (print at SSTest20181111.scala:85)
18/11/12 21:16:25 INFO DAGScheduler: Parents of final stage: List()
18/11/12 21:16:25 INFO DAGScheduler: Missing parents: List()
18/11/12 21:16:25 INFO DAGScheduler: Submitting ResultStage 8 (MapPartitionsRDD[108] at map at SSTest20181111.scala:84), which has no missing parents
18/11/12 21:16:25 INFO MemoryStore: Block broadcast_8 stored as values in memory (estimated size 1648.0 B, free 2.4 GB)
18/11/12 21:16:25 INFO MemoryStore: Block broadcast_8_piece0 stored as bytes in memory (estimated size 1097.0 B, free 2.4 GB)
18/11/12 21:16:25 INFO BlockManagerInfo: Added broadcast_8_piece0 in memory on localhost:49533 (size: 1097.0 B, free: 2.4 GB)
18/11/12 21:16:25 INFO SparkContext: Created broadcast 8 from broadcast at DAGScheduler.scala:1006
18/11/12 21:16:25 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 8 (MapPartitionsRDD[108] at map at SSTest20181111.scala:84)
18/11/12 21:16:25 INFO TaskSchedulerImpl: Adding task set 8.0 with 1 tasks
18/11/12 21:16:25 INFO TaskSetManager: Starting task 0.0 in stage 8.0 (TID 10, localhost, partition 1,NODE_LOCAL, 1936 bytes)
18/11/12 21:16:25 INFO Executor: Running task 0.0 in stage 8.0 (TID 10)
18/11/12 21:16:25 INFO BlockManager: Found block input-0-1542028584400 locally
18/11/12 21:16:25 INFO Executor: Finished task 0.0 in stage 8.0 (TID 10). 937 bytes result sent to driver
18/11/12 21:16:25 INFO TaskSetManager: Finished task 0.0 in stage 8.0 (TID 10) in 3 ms on localhost (1/1)
18/11/12 21:16:25 INFO TaskSchedulerImpl: Removed TaskSet 8.0, whose tasks have all completed, from pool 
18/11/12 21:16:25 INFO DAGScheduler: ResultStage 8 (print at SSTest20181111.scala:85) finished in 0.003 s
18/11/12 21:16:25 INFO DAGScheduler: Job 8 finished: print at SSTest20181111.scala:85, took 0.007842 s
-------------------------------------------
Time: 1542028585000 ms
-------------------------------------------
2018-11-12 21:16:21
2018-11-12 21:16:23

18/11/12 21:16:25 INFO JobScheduler: Finished job streaming job 1542028585000 ms.1 from job set of time 1542028585000 ms
18/11/12 21:16:25 INFO BlockRDD: Removing RDD 105 from persistence list
18/11/12 21:16:25 INFO JobScheduler: Total delay: 0.094 s for time 1542028585000 ms (execution: 0.086 s)
18/11/12 21:16:25 INFO BlockManager: Removing RDD 105
18/11/12 21:16:25 INFO KafkaInputDStream: Removing blocks of RDD BlockRDD[105] at createStream at SSTest20181111.scala:81 of time 1542028585000 ms
18/11/12 21:16:25 INFO MapPartitionsRDD: Removing RDD 106 from persistence list
18/11/12 21:16:25 INFO JobGenerator: Checkpointing graph for time 1542028585000 ms
18/11/12 21:16:25 INFO DStreamGraph: Updating checkpoint data for time 1542028585000 ms
18/11/12 21:16:25 INFO DStreamGraph: Updated checkpoint data for time 1542028585000 ms
18/11/12 21:16:25 INFO CheckpointWriter: Submitted checkpoint of time 1542028585000 ms writer queue
18/11/12 21:16:25 INFO CheckpointWriter: Saving checkpoint for time 1542028585000 ms to file &amp;#39;file:/D:/spark-tmp/checkpoint-1542028585000&amp;#39;
18/11/12 21:16:25 INFO BlockManager: Removing RDD 106
18/11/12 21:16:25 INFO BlockManagerInfo: Removed input-0-1542028579400 on localhost:49533 in memory (size: 92.0 B, free: 2.4 GB)
18/11/12 21:16:25 INFO BlockManagerInfo: Removed input-0-1542028576800 on localhost:49533 in memory (size: 92.0 B, free: 2.4 GB)
18/11/12 21:16:25 INFO BlockManagerInfo: Removed input-0-1542028574800 on localhost:49533 in memory (size: 76.0 B, free: 2.4 GB)
18/11/12 21:16:25 INFO CheckpointWriter: Deleting file:/D:/spark-tmp/checkpoint-1542028560000
18/11/12 21:16:25 INFO CheckpointWriter: Checkpoint for time 1542028585000 ms saved to file &amp;#39;file:/D:/spark-tmp/checkpoint-1542028585000&amp;#39;, took 3100 bytes and 7 ms
18/11/12 21:16:25 INFO DStreamGraph: Clearing checkpoint data for time 1542028585000 ms
18/11/12 21:16:25 INFO DStreamGraph: Cleared checkpoint data for time 1542028585000 ms
18/11/12 21:16:25 INFO ReceivedBlockTracker: Deleting batches ArrayBuffer(1542028575000 ms)
18/11/12 21:16:25 INFO WriteAheadLogManager  for Thread: Attempting to clear 0 old log files in file:/D:/spark-tmp/receivedBlockMetadata older than 1542028580000: 
18/11/12 21:16:25 INFO InputInfoTracker: remove old batch metadata: 1542028575000 ms
18/11/12 21:16:26 INFO MemoryStore: Block input-0-1542028586200 stored as bytes in memory (estimated size 92.0 B, free 2.4 GB)
18/11/12 21:16:26 INFO BlockManagerInfo: Added input-0-1542028586200 in memory on localhost:49533 (size: 92.0 B, free: 2.4 GB)
18/11/12 21:16:26 WARN BlockManager: Block input-0-1542028586200 replicated to only 0 peer(s) instead of 1 peers
18/11/12 21:16:26 INFO BlockGenerator: Pushed block input-0-1542028586200
18/11/12 21:16:29 INFO MemoryStore: Block input-0-1542028588800 stored as bytes in memory (estimated size 92.0 B, free 2.4 GB)
18/11/12 21:16:29 INFO BlockManagerInfo: Added input-0-1542028588800 in memory on localhost:49533 (size: 92.0 B, free: 2.4 GB)
18/11/12 21:16:29 WARN BlockManager: Block input-0-1542028588800 replicated to only 0 peer(s) instead of 1 peers
18/11/12 21:16:29 INFO BlockGenerator: Pushed block input-0-1542028588800
18/11/12 21:16:30 INFO JobScheduler: Added jobs for time 1542028590000 ms
18/11/12 21:16:30 INFO JobGenerator: Checkpointing graph for time 1542028590000 ms
18/11/12 21:16:30 INFO DStreamGraph: Updating checkpoint data for time 1542028590000 ms
18/11/12 21:16:30 INFO DStreamGraph: Updated checkpoint data for time 1542028590000 ms
18/11/12 21:16:30 INFO JobScheduler: Starting job streaming job 1542028590000 ms.0 from job set of time 1542028590000 ms
18/11/12 21:16:30 INFO CheckpointWriter: Submitted checkpoint of time 1542028590000 ms writer queue
18/11/12 21:16:30 INFO CheckpointWriter: Saving checkpoint for time 1542028590000 ms to file &amp;#39;file:/D:/spark-tmp/checkpoint-1542028590000&amp;#39;
18/11/12 21:16:30 INFO SparkContext: Starting job: print at SSTest20181111.scala:83
18/11/12 21:16:30 INFO DAGScheduler: Got job 9 (print at SSTest20181111.scala:83) with 1 output partitions
18/11/12 21:16:30 INFO DAGScheduler: Final stage: ResultStage 9 (print at SSTest20181111.scala:83)
18/11/12 21:16:30 INFO DAGScheduler: Parents of final stage: List()
18/11/12 21:16:30 INFO DAGScheduler: Missing parents: List()
18/11/12 21:16:30 INFO DAGScheduler: Submitting ResultStage 9 (BlockRDD[109] at createStream at SSTest20181111.scala:81), which has no missing parents
18/11/12 21:16:30 INFO CheckpointWriter: Deleting file:/D:/spark-tmp/checkpoint-1542028565000.bk
18/11/12 21:16:30 INFO MemoryStore: Block broadcast_9 stored as values in memory (estimated size 1128.0 B, free 2.4 GB)
18/11/12 21:16:30 INFO CheckpointWriter: Checkpoint for time 1542028590000 ms saved to file &amp;#39;file:/D:/spark-tmp/checkpoint-1542028590000&amp;#39;, took 3104 bytes and 5 ms
18/11/12 21:16:30 INFO MemoryStore: Block broadcast_9_piece0 stored as bytes in memory (estimated size 757.0 B, free 2.4 GB)
18/11/12 21:16:30 INFO BlockManagerInfo: Added broadcast_9_piece0 in memory on localhost:49533 (size: 757.0 B, free: 2.4 GB)
18/11/12 21:16:30 INFO SparkContext: Created broadcast 9 from broadcast at DAGScheduler.scala:1006
18/11/12 21:16:30 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 9 (BlockRDD[109] at createStream at SSTest20181111.scala:81)
18/11/12 21:16:30 INFO TaskSchedulerImpl: Adding task set 9.0 with 1 tasks
18/11/12 21:16:30 INFO TaskSetManager: Starting task 0.0 in stage 9.0 (TID 11, localhost, partition 0,NODE_LOCAL, 1936 bytes)
18/11/12 21:16:30 INFO Executor: Running task 0.0 in stage 9.0 (TID 11)
18/11/12 21:16:30 INFO BlockManager: Found block input-0-1542028586200 locally
18/11/12 21:16:30 INFO Executor: Finished task 0.0 in stage 9.0 (TID 11). 999 bytes result sent to driver
18/11/12 21:16:30 INFO DAGScheduler: ResultStage 9 (print at SSTest20181111.scala:83) finished in 0.002 s
18/11/12 21:16:30 INFO TaskSetManager: Finished task 0.0 in stage 9.0 (TID 11) in 1 ms on localhost (1/1)
18/11/12 21:16:30 INFO TaskSchedulerImpl: Removed TaskSet 9.0, whose tasks have all completed, from pool 
18/11/12 21:16:30 INFO DAGScheduler: Job 9 finished: print at SSTest20181111.scala:83, took 0.006363 s
18/11/12 21:16:30 INFO SparkContext: Starting job: print at SSTest20181111.scala:83
18/11/12 21:16:30 INFO DAGScheduler: Got job 10 (print at SSTest20181111.scala:83) with 1 output partitions
18/11/12 21:16:30 INFO DAGScheduler: Final stage: ResultStage 10 (print at SSTest20181111.scala:83)
18/11/12 21:16:30 INFO DAGScheduler: Parents of final stage: List()
18/11/12 21:16:30 INFO DAGScheduler: Missing parents: List()
18/11/12 21:16:30 INFO DAGScheduler: Submitting ResultStage 10 (BlockRDD[109] at createStream at SSTest20181111.scala:81), which has no missing parents
18/11/12 21:16:30 INFO MemoryStore: Block broadcast_10 stored as values in memory (estimated size 1128.0 B, free 2.4 GB)
18/11/12 21:16:30 INFO MemoryStore: Block broadcast_10_piece0 stored as bytes in memory (estimated size 757.0 B, free 2.4 GB)
18/11/12 21:16:30 INFO BlockManagerInfo: Added broadcast_10_piece0 in memory on localhost:49533 (size: 757.0 B, free: 2.4 GB)
18/11/12 21:16:30 INFO SparkContext: Created broadcast 10 from broadcast at DAGScheduler.scala:1006
18/11/12 21:16:30 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 10 (BlockRDD[109] at createStream at SSTest20181111.scala:81)
18/11/12 21:16:30 INFO TaskSchedulerImpl: Adding task set 10.0 with 1 tasks
18/11/12 21:16:30 INFO TaskSetManager: Starting task 0.0 in stage 10.0 (TID 12, localhost, partition 1,NODE_LOCAL, 1936 bytes)
18/11/12 21:16:30 INFO Executor: Running task 0.0 in stage 10.0 (TID 12)
18/11/12 21:16:30 INFO BlockManager: Found block input-0-1542028588800 locally
18/11/12 21:16:30 INFO Executor: Finished task 0.0 in stage 10.0 (TID 12). 999 bytes result sent to driver
18/11/12 21:16:30 INFO TaskSetManager: Finished task 0.0 in stage 10.0 (TID 12) in 1 ms on localhost (1/1)
18/11/12 21:16:30 INFO TaskSchedulerImpl: Removed TaskSet 10.0, whose tasks have all completed, from pool 
18/11/12 21:16:30 INFO DAGScheduler: ResultStage 10 (print at SSTest20181111.scala:83) finished in 0.002 s
18/11/12 21:16:30 INFO DAGScheduler: Job 10 finished: print at SSTest20181111.scala:83, took 0.005812 s
18/11/12 21:16:30 INFO JobScheduler: Finished job streaming job 1542028590000 ms.0 from job set of time 1542028590000 ms
18/11/12 21:16:30 INFO JobScheduler: Starting job streaming job 1542028590000 ms.1 from job set of time 1542028590000 ms
-------------------------------------------
Time: 1542028590000 ms
-------------------------------------------
(null,2018-11-12 21:16:24)
(null,2018-11-12 21:16:27)

18/11/12 21:16:30 INFO SparkContext: Starting job: print at SSTest20181111.scala:85
18/11/12 21:16:30 INFO DAGScheduler: Got job 11 (print at SSTest20181111.scala:85) with 1 output partitions
18/11/12 21:16:30 INFO DAGScheduler: Final stage: ResultStage 11 (print at SSTest20181111.scala:85)
18/11/12 21:16:30 INFO DAGScheduler: Parents of final stage: List()
18/11/12 21:16:30 INFO DAGScheduler: Missing parents: List()
18/11/12 21:16:30 INFO DAGScheduler: Submitting ResultStage 11 (MapPartitionsRDD[110] at map at SSTest20181111.scala:84), which has no missing parents
18/11/12 21:16:30 INFO MemoryStore: Block broadcast_11 stored as values in memory (estimated size 1648.0 B, free 2.4 GB)
18/11/12 21:16:30 INFO MemoryStore: Block broadcast_11_piece0 stored as bytes in memory (estimated size 1097.0 B, free 2.4 GB)
18/11/12 21:16:30 INFO BlockManagerInfo: Added broadcast_11_piece0 in memory on localhost:49533 (size: 1097.0 B, free: 2.4 GB)
18/11/12 21:16:30 INFO SparkContext: Created broadcast 11 from broadcast at DAGScheduler.scala:1006
18/11/12 21:16:30 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 11 (MapPartitionsRDD[110] at map at SSTest20181111.scala:84)
18/11/12 21:16:30 INFO TaskSchedulerImpl: Adding task set 11.0 with 1 tasks
18/11/12 21:16:30 INFO TaskSetManager: Starting task 0.0 in stage 11.0 (TID 13, localhost, partition 0,NODE_LOCAL, 1936 bytes)
18/11/12 21:16:30 INFO Executor: Running task 0.0 in stage 11.0 (TID 13)
18/11/12 21:16:30 INFO BlockManager: Found block input-0-1542028586200 locally
18/11/12 21:16:30 INFO Executor: Finished task 0.0 in stage 11.0 (TID 13). 937 bytes result sent to driver
18/11/12 21:16:30 INFO TaskSetManager: Finished task 0.0 in stage 11.0 (TID 13) in 2 ms on localhost (1/1)
18/11/12 21:16:30 INFO TaskSchedulerImpl: Removed TaskSet 11.0, whose tasks have all completed, from pool 
18/11/12 21:16:30 INFO DAGScheduler: ResultStage 11 (print at SSTest20181111.scala:85) finished in 0.002 s
18/11/12 21:16:30 INFO DAGScheduler: Job 11 finished: print at SSTest20181111.scala:85, took 0.008429 s
18/11/12 21:16:30 INFO SparkContext: Starting job: print at SSTest20181111.scala:85
18/11/12 21:16:30 INFO DAGScheduler: Got job 12 (print at SSTest20181111.scala:85) with 1 output partitions
18/11/12 21:16:30 INFO DAGScheduler: Final stage: ResultStage 12 (print at SSTest20181111.scala:85)
18/11/12 21:16:30 INFO DAGScheduler: Parents of final stage: List()
18/11/12 21:16:30 INFO DAGScheduler: Missing parents: List()
18/11/12 21:16:30 INFO DAGScheduler: Submitting ResultStage 12 (MapPartitionsRDD[110] at map at SSTest20181111.scala:84), which has no missing parents
18/11/12 21:16:30 INFO MemoryStore: Block broadcast_12 stored as values in memory (estimated size 1648.0 B, free 2.4 GB)
18/11/12 21:16:30 INFO MemoryStore: Block broadcast_12_piece0 stored as bytes in memory (estimated size 1097.0 B, free 2.4 GB)
18/11/12 21:16:30 INFO BlockManagerInfo: Added broadcast_12_piece0 in memory on localhost:49533 (size: 1097.0 B, free: 2.4 GB)
18/11/12 21:16:30 INFO SparkContext: Created broadcast 12 from broadcast at DAGScheduler.scala:1006
18/11/12 21:16:30 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 12 (MapPartitionsRDD[110] at map at SSTest20181111.scala:84)
18/11/12 21:16:30 INFO TaskSchedulerImpl: Adding task set 12.0 with 1 tasks
18/11/12 21:16:30 INFO TaskSetManager: Starting task 0.0 in stage 12.0 (TID 14, localhost, partition 1,NODE_LOCAL, 1936 bytes)
18/11/12 21:16:30 INFO Executor: Running task 0.0 in stage 12.0 (TID 14)
18/11/12 21:16:30 INFO BlockManager: Found block input-0-1542028588800 locally
18/11/12 21:16:30 INFO Executor: Finished task 0.0 in stage 12.0 (TID 14). 937 bytes result sent to driver
18/11/12 21:16:30 INFO TaskSetManager: Finished task 0.0 in stage 12.0 (TID 14) in 2 ms on localhost (1/1)
18/11/12 21:16:30 INFO TaskSchedulerImpl: Removed TaskSet 12.0, whose tasks have all completed, from pool 
18/11/12 21:16:30 INFO DAGScheduler: ResultStage 12 (print at SSTest20181111.scala:85) finished in 0.002 s
18/11/12 21:16:30 INFO DAGScheduler: Job 12 finished: print at SSTest20181111.scala:85, took 0.006536 s
18/11/12 21:16:30 INFO JobScheduler: Finished job streaming job 1542028590000 ms.1 from job set of time 1542028590000 ms
-------------------------------------------
Time: 1542028590000 ms
-------------------------------------------
2018-11-12 21:16:24
2018-11-12 21:16:27
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;One last question: kafkaStream.print() prints a tuple whose first element is null, and kafkaStream is a ReceiverInputDStream[(String, String)], so what is that first element?&lt;/p&gt;
&lt;p&gt;I found an article that does not answer the question directly but describes the streaming data-receiving pipeline in great detail, &lt;a class=&#34;link&#34; href=&#34;https://www.jianshu.com/p/3195fb3c4191&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://www.jianshu.com/p/3195fb3c4191&lt;/a&gt; , worth re-reading later; &lt;a class=&#34;link&#34; href=&#34;http://bit1129.iteye.com/blog/2184468&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;http://bit1129.iteye.com/blog/2184468&lt;/a&gt; also digs into some details of KafkaUtils.createStream and touches on the printing behavior, but not the null. After searching around without an answer I set it aside; it is not the main issue.&lt;/p&gt;
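&lt;p&gt;For what it is worth, my reading of the old receiver-based API is that the pair is (message key, message value), and the key is null whenever the producer does not set one, which is the case for the console producer used above; this is an assumption from the API docs rather than something verified here. A minimal sketch of keeping only the values:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;// kafkaStream: ReceiverInputDStream[(String, String)] as created above.
// The first element should be the Kafka message key; the console producer
// sends none, hence the null printed earlier. Keep only the values:
val values = kafkaStream.map { case (_, value) =&amp;gt; value }
values.print()
&lt;/code&gt;&lt;/pre&gt;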
&lt;hr&gt;
&lt;p&gt;Some additional references:&lt;/p&gt;
&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;https://www.ibm.com/developerworks/cn/opensource/os-cn-spark-practice2/index.html&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://www.ibm.com/developerworks/cn/opensource/os-cn-spark-practice2/index.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;https://www.cnblogs.com/xlturing/p/6246538.html&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://www.cnblogs.com/xlturing/p/6246538.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;https://segmentfault.com/a/1190000012730949&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://segmentfault.com/a/1190000012730949&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;https://colobu.com/2015/01/05/kafka-spark-streaming-integration-summary/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://colobu.com/2015/01/05/kafka-spark-streaming-integration-summary/&lt;/a&gt;&lt;/p&gt;
</description>
        </item>
        <item>
        <title>Optimizing Spark writes to gp/tpg: writing 2.37 million rows goes from 77 minutes to 34 seconds</title>
        <link>https://blog.zrj.me/posts/2017-07-27-spark-%E5%86%99-gptpg-%E6%95%88%E7%8E%87%E4%BC%98%E5%8C%96-%E5%86%99%E5%85%A5-237w-%E8%A1%8C%E6%95%B0%E6%8D%AE%E8%80%97%E6%97%B6%E4%BB%8E-77-%E5%88%86%E9%92%9F%E5%88%B0-34/</link>
        <pubDate>Thu, 27 Jul 2017 09:38:06 +0800</pubDate>
        
        <guid>https://blog.zrj.me/posts/2017-07-27-spark-%E5%86%99-gptpg-%E6%95%88%E7%8E%87%E4%BC%98%E5%8C%96-%E5%86%99%E5%85%A5-237w-%E8%A1%8C%E6%95%B0%E6%8D%AE%E8%80%97%E6%97%B6%E4%BB%8E-77-%E5%88%86%E9%92%9F%E5%88%B0-34/</guid>
        <description>&lt;p&gt;摘自内部分享，有删减。&lt;/p&gt;
&lt;p&gt;具体到我们这次的场景中，我们用的是 gp，gp 全称是 greenplum，是一个 mpp 版本的 postgresql，可以参考这个简介 &lt;a class=&#34;link&#34; href=&#34;http://www.infoq.com/cn/news/2015/11/PostgreSQL-Pivotal&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;http://www.infoq.com/cn/news/2015/11/PostgreSQL-Pivotal&lt;/a&gt; ，协议上兼容 postgresql，我们可以用普通能连 postgresql 的方式去连 gp，并且把 gp 看成一个黑盒的集群版本的 postgresql 来使用&lt;/p&gt;
&lt;p&gt;然后这次的优化的手段也很简单，就是从原来的 jdbc 连接拼 sql 改成用 org.postgresql.copy.CopyManager，类似 postgresql 命令行下的 \copy 命令，所以一句话就能说完，而写这个文章的点主要是分享一下这个过程中的一些思路历程和细节&lt;/p&gt;
&lt;p&gt;对比图&lt;/p&gt;
&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;//zrj.me/wp-content/uploads/2017/07/1501056179_100_w367_h360.png&#34; &gt;&lt;img src=&#34;https://blog.zrj.me/images/1501056179_100_w367_h360.png&#34;
	
	
	
	loading=&#34;lazy&#34;
	
		alt=&#34;1501056179\_100\_w367\_h360&#34;
	
	
&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;For comparison, our original write path connected over JDBC and built insert statements. That works well enough for OLTP, but in an OLAP scenario the inefficiency starts to show: the cost is not only building the query string on the client, but, more importantly, parsing the query on the DB server, plus the transaction and rollback-log overhead that comes with it.&lt;/p&gt;
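&lt;p&gt;For context, a rough sketch of the kind of string-built insert path described above (purely illustrative; url, user, password and the test.product table are the same names that appear in the copy code later in this post):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;import java.sql.DriverManager

// data: Array[Array[String]], one Array[String] per row
val con = DriverManager.getConnection(url, user, password)
val st = con.createStatement()
for (row &amp;lt;- data) {
  // every statement is shipped, parsed and logged individually on the server
  st.executeUpdate(s&amp;#34;INSERT INTO test.product VALUES (&amp;#39;${row(0)}&amp;#39;, &amp;#39;${row(1)}&amp;#39;, &amp;#39;${row(2)}&amp;#39;, &amp;#39;${row(3)}&amp;#39;)&amp;#34;)
}
st.close()
con.close()
&lt;/code&gt;&lt;/pre&gt;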
&lt;p&gt;As an RDBMS built for handling large volumes of data, gp obviously needs an answer for bulk data IO. How does the vendor address it? According to &lt;a class=&#34;link&#34; href=&#34;https://gpdb.docs.pivotal.io/4320/admin_guide/load.html&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://gpdb.docs.pivotal.io/4320/admin_guide/load.html&lt;/a&gt; , the docs mainly offer these options:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://gpdb.docs.pivotal.io/4320/admin_guide/load.html#topic3&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;External Tables&lt;/a&gt; enable accessing external files as if they are regular database tables.&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://gpdb.docs.pivotal.io/4320/admin_guide/load.html#topic4&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;gpload&lt;/a&gt; provides an interface to the Greenplum Database parallel loader.&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://gpdb.docs.pivotal.io/4320/admin_guide/load.html#topic5&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;COPY&lt;/a&gt; is the standard PostgreSQL non-parallel data loading tool.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The external-table approach can be backed by several mechanisms:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;gpfdist: points to a directory on the file host and serves external data files to all Greenplum Database segments in parallel.&lt;/li&gt;
&lt;li&gt;gpfdists: the secure version of gpfdist.&lt;/li&gt;
&lt;li&gt;file:// accesses external data files on a segment host that the Greenplum superuser (gpadmin) can access.&lt;/li&gt;
&lt;li&gt;gphdfs: accesses files on a Hadoop Distributed File System (HDFS).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;gpfdist serves data files on an external host to all Greenplum segment nodes so they can load in parallel, and gpfdists is its secure variant, but this approach raised some concerns:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;it requires installing and running an extra program on the segment hosts&lt;/li&gt;
&lt;li&gt;it is not compatible with tpg (tpg has no notion of segment nodes at all, alas)&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;So it was ruled out. gphdfs lets gp read data from HDFS, also in parallel, but it was ultimately rejected too, for these reasons:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;the HDFS behind our TDW uses custom authentication&lt;/li&gt;
&lt;li&gt;our Hive tables are not flat two-dimensional tables: because the metric values are sparse we store them in a format similar to the hstore type in PostgreSQL, which does not lend itself to a direct table-to-table copy into a flat table in gp&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;And gpload is really just a wrapper around the external-table loading feature:&lt;/p&gt;
&lt;p&gt;The gpload data loading utility is the interface to Greenplum’s external table parallel loading feature.&lt;/p&gt;
&lt;p&gt;So the choice finally landed on copy. Its main advantages are:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;it is a standard PostgreSQL tool, seamlessly compatible with both gp and tpg: write it once, use it everywhere&lt;/li&gt;
&lt;li&gt;it needs no extra dependencies or deployment and places no special requirements on the target server&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Although the official docs describe copy as a non-parallel loading tool, in practice its throughput is not bad at all.&lt;/p&gt;
&lt;p&gt;There are two ways to use copy. One is on the command line, see &lt;a class=&#34;link&#34; href=&#34;https://www.postgresql.org/docs/9.2/static/sql-copy.html&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://www.postgresql.org/docs/9.2/static/sql-copy.html&lt;/a&gt; ; the other is to pull in the JDBC jar and call it from code, see &lt;a class=&#34;link&#34; href=&#34;https://jdbc.postgresql.org/documentation/publicapi/org/postgresql/copy/CopyManager.html&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://jdbc.postgresql.org/documentation/publicapi/org/postgresql/copy/CopyManager.html&lt;/a&gt; . The Javadoc of the method says:&lt;/p&gt;
&lt;p&gt;Use COPY FROM STDIN for very fast copying from an InputStream into a database table.&lt;/p&gt;
&lt;p&gt;Which shows the author of this tool is quite confident (laughs).&lt;/p&gt;
&lt;p&gt;The method has two overloads:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;public long copyIn(String sql,
                   Reader from)
            throws SQLException,
                   IOException
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;and&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;public long copyIn(String sql,
                   InputStream from)
            throws SQLException,
                   IOException
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The natural question is: what is the difference between reading from a Reader and reading from an InputStream?&lt;/p&gt;
&lt;p&gt;Per this article, &lt;a class=&#34;link&#34; href=&#34;http://blog.sina.com.cn/s/blog_6d3183b50101cri5.html&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;http://blog.sina.com.cn/s/blog_6d3183b50101cri5.html&lt;/a&gt; , the main difference is:&lt;/p&gt;
&lt;p&gt;InputStream reads a byte stream rather than text, which is the fundamental difference from the Reader class: reading with a Reader yields a char array or a String, while reading with an InputStream yields a byte array.&lt;/p&gt;
&lt;p&gt;In other words, one is byte-oriented and the other is character-oriented, and the character-oriented one has to deal with choosing a character encoding and paying the decode/encode cost. So, for efficiency, we should pick the byte-oriented overload.&lt;/p&gt;
&lt;p&gt;Looking at the source confirms this:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;    public long copyIn(final String sql, InputStream from, int bufferSize) throws SQLException, IOException {
        byte[] buf = new byte[bufferSize];
        int len;
        CopyIn cp = copyIn(sql);
        try {
            while( (len = from.read(buf)) &amp;gt; 0 ) {
                cp.writeToCopy(buf, 0, len);
            }
            return cp.endCopy();
        } finally { // see to it that we do not leave the connection locked
            if(cp.isActive())
                cp.cancelCopy();
        }
    }
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;With the InputStream variant, the bytes read in can be written out directly.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;    public long copyIn(final String sql, Reader from, int bufferSize) throws SQLException, IOException {
        char[] cbuf = new char[bufferSize];
        int len;
        CopyIn cp = copyIn(sql);
        try {
            while ( (len = from.read(cbuf)) &amp;gt; 0) {
                byte[] buf = encoding.encode(new String(cbuf, 0, len));
                cp.writeToCopy(buf, 0, buf.length);
            }
            return cp.endCopy();
        } finally { // see to it that we do not leave the connection locked
            if(cp.isActive())
                cp.cancelCopy();
        }
    }
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;With the Reader variant, everything read in still has to go through an encoding step, which further confirms the reasoning above.&lt;/p&gt;
&lt;p&gt;Now we can put together a demo of how to use it. What we ultimately want to write out is a computation result, i.e. an RDD[T], but since it has to become an InputStream, we accept an Array[Array[String]] parameter instead.&lt;/p&gt;
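&lt;p&gt;A minimal sketch of producing that parameter from an assumed rdd: RDD[Metric], where the record type and field names are made up for illustration; note that this collect pulls everything onto the driver, which is exactly what gets revisited at the end of the post:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;// Hypothetical record type, only to show the shape of the data
case class Metric(pageId: String, name: String, value: String, country: String)

// Flatten each record into an Array[String] and collect to the driver
val data: Array[Array[String]] = rdd
  .map(m =&amp;gt; Array(m.pageId, m.name, m.value, m.country))
  .collect()
&lt;/code&gt;&lt;/pre&gt;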
&lt;p&gt;So the code so far looks like this:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;  def copyIn(data: Array[Array[String]], tblName: String): Long = {
    var con: Connection = null
    try {      
      Class.forName(&amp;#34;org.postgresql.Driver&amp;#34;)
      println(&amp;#34;connecting to database with url &amp;#34; + url)
      con = DriverManager.getConnection(url, user, password)
      val cm = new CopyManager(con.asInstanceOf[BaseConnection])
      val COPY_CMD = s&amp;#34;COPY $tblName from STDIN&amp;#34;
      val start = System.currentTimeMillis()
      val affectedRowCount = cm.copyIn(COPY_CMD, genInputStream(data))
      val finish = System.currentTimeMillis()
      println(&amp;#34;copy operation completed successfully in &amp;#34; + (finish-start)/1000.0 + &amp;#34; seconds, affectedRowCount &amp;#34; + affectedRowCount)
      con.close()
      affectedRowCount
    } catch {
      case ex: SQLException =&amp;gt; println(&amp;#34;Failed to copy data: &amp;#34; + ex.getMessage()); 0
    } finally {
      try {
        if (con != null) con.close()
      } catch {
        case ex: SQLException =&amp;gt; println(ex.getMessage())
      }
    }
  }
  
  def genInputStream(arr: Array[Array[String]]): InputStream = {    
    val stringBuilder = new StringBuilder
    println(&amp;#34;input data has &amp;#34; + arr.length + &amp;#34; rows&amp;#34;)
    if (arr.length != 0) {
      val rowcount = arr.length;
      val columncount = arr(0).length
      for (i &amp;lt;- 0 to rowcount-1; j &amp;lt;- 0 to columncount-1) {
        stringBuilder.append(arr(i)(j) + (if (j == columncount-1) &amp;#34;\r\n&amp;#34; else &amp;#34;\t&amp;#34;))
      }
    }
    val str = stringBuilder.toString
    println(&amp;#34;input data &amp;#34; + arr.length + &amp;#34; rows total &amp;#34; + str.length + &amp;#34; bytes&amp;#34;)
    new ByteArrayInputStream(str.getBytes(StandardCharsets.UTF_8))
  }
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This code reproduces the benchmark result quoted at the top of the post just fine, but it clearly buries a big landmine: it can blow up the heap.&lt;/p&gt;
&lt;p&gt;Leaving aside that, to satisfy the API, we have to organize the input as an Array[Array[String]] and pull it onto the driver (which already violates the principle that the driver should not do the heavy lifting), after collecting the RDD we still have to turn it into an InputStream, building one huge string through a StringBuilder along the way. That is extravagant twice over: the Array[Array[String]] takes one copy of the memory and the built string takes another.&lt;/p&gt;
&lt;p&gt;So I tried PipedOutputStream and PipedInputStream, a pipe-based streaming read/write: start a separate thread that writes into the PipedOutputStream, which blocks whenever the bounded buffer is full, while the reading side reads from the PipedInputStream and ships the data to the network as it goes. The JVM breathes a lot easier. Before doing that, though, one question: how do we confirm these changes actually help? That calls for a JVM runtime memory monitoring tool.&lt;/p&gt;
&lt;p&gt;It turns out the JDK installation ships with exactly such a thing:&lt;/p&gt;
&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;//zrj.me/wp-content/uploads/2017/07/1501063231_39_w214_h48.png&#34; &gt;&lt;img src=&#34;https://blog.zrj.me/images/1501063231_39_w214_h48.png&#34;
	
	
	
	loading=&#34;lazy&#34;
	
		alt=&#34;1501063231\_39\_w214\_h48&#34;
	
	
&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;It is a runtime monitor; basic CPU and memory monitoring can be picked up without reading any manual.&lt;/p&gt;
&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;//zrj.me/wp-content/uploads/2017/07/1501063311_9_w929_h882.png&#34; &gt;&lt;img src=&#34;https://blog.zrj.me/images/1501063311_9_w929_h882.png&#34;
	
	
	
	loading=&#34;lazy&#34;
	
		alt=&#34;1501063311\_9\_w929\_h882&#34;
	
	
&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;It plots CPU and memory curves for the running JVM, complete with gauges.&lt;/p&gt;
&lt;p&gt;We can also read memory usage through the Runtime API; following &lt;a class=&#34;link&#34; href=&#34;http://viralpatel.net/blogs/getting-jvm-heap-size-used-memory-total-memory-using-java-runtime/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;http://viralpatel.net/blogs/getting-jvm-heap-size-used-memory-total-memory-using-java-runtime/&lt;/a&gt;  we can add a print helper like this:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;  def printMem(currentMoment: String) {
    println(s&amp;#34;=====$currentMoment=========&amp;#34;)
    val mb = 1024*1024
    val runtime = Runtime.getRuntime()
    println(&amp;#34;Used Memory:&amp;#34; + (runtime.totalMemory() - runtime.freeMemory()) / mb)
    println(&amp;#34;Free Memory:&amp;#34; + runtime.freeMemory() / mb)
    println(&amp;#34;Total Memory:&amp;#34; + runtime.totalMemory() / mb)
    println(&amp;#34;===============&amp;#34;)
  }
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Then instrument the original function with it:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;  def genInputStream(arr: Array[Array[String]]): InputStream = {    
    printMem(&amp;#34;before gen string&amp;#34;)
    val stringBuilder = new StringBuilder
    println(&amp;#34;input data has &amp;#34; + arr.length + &amp;#34; rows&amp;#34;)
    if (arr.length != 0) {
      val rowcount = arr.length;
      val columncount = arr(0).length
      for (i &amp;lt;- 0 to rowcount-1; j &amp;lt;- 0 to columncount-1) {
        stringBuilder.append(arr(i)(j) + (if (j == columncount-1) &amp;#34;\r\n&amp;#34; else &amp;#34;\t&amp;#34;))
      }
    }
    val str = stringBuilder.toString
    printMem(&amp;#34;after gen string&amp;#34;)
    println(&amp;#34;input data &amp;#34; + arr.length + &amp;#34; rows total &amp;#34; + str.length + &amp;#34; bytes&amp;#34;)
    new ByteArrayInputStream(str.getBytes(StandardCharsets.UTF_8))
  }
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;And generate 10 million rows of test data:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;  def main(args: Array[String]): Unit = {
    //var data = Array(Array(&amp;#34;P1&amp;#34;,&amp;#34;PenDrive&amp;#34;,&amp;#34;50&amp;#34;,&amp;#34;US&amp;#34;), Array(&amp;#34;P1&amp;#34;,&amp;#34;PenDrive&amp;#34;,&amp;#34;300&amp;#34;,&amp;#34;US&amp;#34;))
    printMem(&amp;#34;before gen array&amp;#34;)
    val data = Array.fill(100*10000*10)(Array(&amp;#34;P1&amp;#34;,&amp;#34;PenDrive&amp;#34;,&amp;#34;50&amp;#34;,&amp;#34;US&amp;#34;))
    printMem(&amp;#34;after gen array&amp;#34;)
    copyIn(data, &amp;#34;test.product&amp;#34;)
  }
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Using a test table and running it directly in Eclipse gives the following output&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=====before gen array=========
Used Memory:3
Free Memory:241
Total Memory:245
===============
=====after gen array=========
Used Memory:345
Free Memory:290
Total Memory:635
===============
=====before gen string=========
Used Memory:352
Free Memory:305
Total Memory:658
===============
input data has 10000000 rows
=====after gen string=========
Used Memory:1989
Free Memory:479
Total Memory:2469
===============
input data 10000000 rows total 190000000 bytes
copy operation completed successfully in 69.951 seconds, affectedRowCount 10000000
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;As the numbers show, generating the InputStream through a StringBuilder costs far more than one extra copy of the data. That is to be expected: assuming a pre-Java-9 JVM where each char takes 2 bytes, the 190-million-character String alone is roughly 380 MB, the StringBuilder keeps a backing array at least as large (plus the copies made while it grows), and getBytes produces another byte array of about 190 MB&lt;/p&gt;
&lt;p&gt;The next step, then, is to switch to the PipedOutputStream and PipedInputStream approach&lt;/p&gt;
&lt;p&gt;Rewrite the method that produces the InputStream as follows&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;  def genPipedInputStream(arr: Array[Array[String]]): InputStream = {
    printMem(&amp;#34;before gen inputstream&amp;#34;)
    val out = new PipedOutputStream
    (new Thread(){
      override def run {
        println(&amp;#34;input data has &amp;#34; + arr.length + &amp;#34; rows&amp;#34;)
        if (arr.length != 0) {
          val rowcount = arr.length;
          val columncount = arr(0).length
          for (i &amp;lt;- 0 to rowcount-1; j &amp;lt;- 0 to columncount-1) {
            out.write((arr(i)(j) + (if (j == columncount-1) &amp;#34;\r\n&amp;#34; else &amp;#34;\t&amp;#34;)).getBytes(StandardCharsets.UTF_8))
          }
        }        
        out.close()
        println(&amp;#34;PipedOutputStream closed&amp;#34;)
      }
    }).start()
    val in = new PipedInputStream
    in.connect(out)
    printMem(&amp;#34;after gen inputstream&amp;#34;)
    in    
  }
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;(There is actually a hidden question here: do we need a CountDownLatch?)&lt;/p&gt;
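&lt;p&gt;As a minimal sketch of how that hidden question can be made to disappear (my own illustration, not the code from the original run): construct the PipedInputStream already connected to the PipedOutputStream, before the writer thread starts. With that ordering out.write() can never hit an unconnected pipe, so no CountDownLatch is needed; passing a larger buffer than the 1 KB default is an optional extra&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;  // sketch only: same logic as genPipedInputStream above, with the pipe connected up front
  import java.io.{InputStream, PipedInputStream, PipedOutputStream}
  import java.nio.charset.StandardCharsets

  def genPipedInputStreamSafe(arr: Array[Array[String]]): InputStream = {
    val out = new PipedOutputStream
    // connect before the writer thread starts, so out.write() never sees an unconnected pipe
    val in = new PipedInputStream(out, 64 * 1024)
    (new Thread() {
      override def run {
        for (i &amp;lt;- arr.indices; j &amp;lt;- arr(i).indices) {
          out.write((arr(i)(j) + (if (j == arr(i).length - 1) &amp;#34;\r\n&amp;#34; else &amp;#34;\t&amp;#34;)).getBytes(StandardCharsets.UTF_8))
        }
        out.close()
      }
    }).start()
    in
  }
&lt;/code&gt;&lt;/pre&gt;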
&lt;p&gt;The output is shown below; the run takes somewhat longer, but memory is now under control&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=====before gen array=========
Used Memory:3
Free Memory:241
Total Memory:245
===============
=====after gen array=========
Used Memory:345
Free Memory:295
Total Memory:641
===============
=====before gen inputstream=========
Used Memory:352
Free Memory:288
Total Memory:641
===============
=====after gen inputstream=========
Used Memory:352
Free Memory:286
input data has 10000000 rows
Total Memory:641
===============
PipedOutputStream closed
copy operation completed successfully in 97.917 seconds, affectedRowCount 10000000
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;and the memory curve stays essentially flat throughout the run&lt;/p&gt;
&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;//zrj.me/wp-content/uploads/2017/07/1501069503_38_w929_h882.png&#34; &gt;&lt;img src=&#34;https://blog.zrj.me/images/1501069503_38_w929_h882.png&#34;
	
	
	
	loading=&#34;lazy&#34;
	
		alt=&#34;1501069503\_38\_w929\_h882&#34;
	
	
&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The next question that surfaces is: do we really need to collect the RDD[T] to the driver at all?&lt;/p&gt;
&lt;p&gt;The answer is that we do not. With the mapPartitions operator we can write it as follows&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;val start = System.currentTimeMillis()
dataGpFlatten.mapPartitions(x =&amp;gt; {
  val rows = x.toArray                  // materialize once; the iterator x is consumed here
  GPCopyMgr.copyIn(rows, &amp;#34;xxxxx&amp;#34;)
  rows.iterator                         // return a fresh iterator, not the exhausted x
}).count
val finish = System.currentTimeMillis()
println(&amp;#34;operation completed successfully in &amp;#34; + (finish-start)/1000.0 + &amp;#34; seconds&amp;#34;)
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Note that mapPartitions is not an action but a transformation, so we have to chain an action such as count after it to trigger execution&lt;/p&gt;
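&lt;p&gt;A possible alternative, sketched here on my own initiative rather than taken from the post: foreachPartition is itself an action, so no trailing count is needed; the partition is still materialized with toArray because copyIn expects an Array[Array[String]]&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;// sketch only: same write, triggered directly by the foreachPartition action
dataGpFlatten.foreachPartition(x =&amp;gt; {
  GPCopyMgr.copyIn(x.toArray, &amp;#34;xxxxx&amp;#34;)
})
&lt;/code&gt;&lt;/pre&gt;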
&lt;p&gt;The driver no longer does any writing, and the total elapsed time even drops by about 5 s compared with the figure at the start of the article; it is in the same ballpark, though, and can be treated as within experimental error&lt;/p&gt;
&lt;p&gt;With this mapPartitions approach, the points to watch are&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The choice of partition count: too many partitions means too many concurrent connections to the db, while partitions that are too small actually hurt throughput (a short sketch follows this list)&lt;/li&gt;
&lt;li&gt;If a re-partition is needed, keep in mind that re-partitioning has its own cost&lt;/li&gt;
&lt;li&gt;Do not forget to finish with an action&lt;/li&gt;
&lt;/ol&gt;
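&lt;p&gt;A brief sketch of point 1 (my own illustration): cap the number of concurrent COPY sessions by coalescing to a fixed number of partitions before the write. coalesce avoids a full shuffle; use repartition instead if the data is badly skewed and needs rebalancing&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;// sketch only: at most 16 concurrent connections to the database
dataGpFlatten
  .coalesce(16)
  .foreachPartition(x =&amp;gt; {
    GPCopyMgr.copyIn(x.toArray, &amp;#34;xxxxx&amp;#34;)
  })
&lt;/code&gt;&lt;/pre&gt;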
&lt;p&gt;That essentially wraps it up; what remains is engineering work, for example&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Delete the target partition before writing, to avoid dirty data&lt;/li&gt;
&lt;li&gt;After writing, verify that the number of rows written matches, in case some partition hit an exception mid-write (which raises a further question: if an executor dies halfway through its write, is the only recourse to rerun the whole job to clean up the mess? A sketch of the row-count check follows this list)&lt;/li&gt;
&lt;li&gt;Make the logs more readable&lt;/li&gt;
&lt;/ol&gt;
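&lt;p&gt;A hedged sketch of the row-count check in point 2 (my own illustration; it assumes GPCopyMgr.copyIn returns the number of rows written, as the affectedRowCount log line suggests, and would need adjusting if the wrapper returns Unit)&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;// sketch only: count the source rows, sum the per-partition write results, compare
val expected = dataGpFlatten.count()
val written = dataGpFlatten.mapPartitions(x =&amp;gt; {
  val rows = x.toArray
  Iterator(GPCopyMgr.copyIn(rows, &amp;#34;xxxxx&amp;#34;))   // assumed to return the affected row count
}).reduce(_ + _)
require(written == expected, s&amp;#34;wrote $written rows, expected $expected&amp;#34;)
&lt;/code&gt;&lt;/pre&gt;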
&lt;p&gt;All of the above is engineering-side work; really it is about not digging holes for yourself, haha&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&#34;历史评论&#34;&gt;Comment History
&lt;/h2&gt;&lt;p&gt;&lt;strong&gt;郭祥汝&lt;/strong&gt; (2019-08-24 12:44:03):&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Hi, I have two tables, both with 2 million rows. Each row of the first table is small, while each row of the second is large. A COPY write into the first table is very fast, but the second is very slow and throws a read and dead error. What could the problem be?&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;strong&gt;ZRJ&lt;/strong&gt; (2019-08-29 10:19:15):&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;I have not run into that error myself, and a quick search does not turn up much either. Do you have more detailed context and the full error message?&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;strong&gt;郭祥汝&lt;/strong&gt; (2019-08-30 17:57:23):&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;In that table every row is large text data. Writing it with COPY does work but is very slow: about 7 minutes for 3 million rows, whereas a table without large text columns takes 30-odd seconds for 3 million rows. That is a huge gap. Do you have any other suggestions? Thanks!&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;strong&gt;ZRJ&lt;/strong&gt; (2019-09-01 11:42:46):&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Check the size of those large text values: how many bytes per row? Also watch the disk IO and NIC IO on the db machine while the write is running&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;strong&gt;claude&lt;/strong&gt; (2026-03-13 14:42:12):&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Nice post: the path from JDBC insert to the COPY protocol is clearly reasoned, and the choice of solution is pragmatic. A few points I would like to discuss:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;On the PipedInputStream/PipedOutputStream scheme: the design essentially turns the two stages of serializing the data and shipping it over the network from sequential into pipelined execution, the classic trade of memory for time. One known pitfall of the piped streams in the JVM is that the default buffer is only 1024 bytes, so if the producer and consumer run at very different speeds you get frequent wait/notify churn; consider passing a larger bufferSize to the constructor (say 64 KB), or simply switching to java.nio Pipe.open() for better throughput.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The executor-dies-halfway problem you mention is the classic exactly-once difficulty of distributed writes. One workable engineering approach: have each partition write into a staging table keyed by a batch_id (or a GP temporary table), and only after every partition has succeeded run a single INSERT INTO &amp;hellip; SELECT or ALTER TABLE EXCHANGE PARTITION to flip the data in atomically. Any failed partition can then be retried safely without rerunning the whole job.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Also, since mapPartitions is already in play, you could go one step further: build the PipedOutputStream inside mapPartitions and feed the CopyManager directly, skipping even the intermediate Array[Array[String]] materialization. The iterator then streams straight into the write and the memory overhead drops to nearly zero.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;
</description>
        </item>
        <item>
        <title>spark 中的日志</title>
        <link>https://blog.zrj.me/posts/2017-03-02-spark-%E4%B8%AD%E7%9A%84%E6%97%A5%E5%BF%97/</link>
        <pubDate>Thu, 02 Mar 2017 21:48:56 +0800</pubDate>
        
        <guid>https://blog.zrj.me/posts/2017-03-02-spark-%E4%B8%AD%E7%9A%84%E6%97%A5%E5%BF%97/</guid>
        <description>&lt;p&gt;While packaging a Spark Streaming project to run on YARN, I found that my own log4j.properties was not being read&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
log4j: Trying to find [log4j.xml] using context classloader org.apache.spark.util.MutableURLClassLoader@70ad2036.
log4j: Trying to find [log4j.xml] using sun.misc.Launcher$AppClassLoader@5561bfa3 class loader.
log4j: Trying to find [log4j.xml] using ClassLoader.getSystemResource().
log4j: Trying to find [log4j.properties] using context classloader org.apache.spark.util.MutableURLClassLoader@70ad2036.
log4j: Using URL [file:/etc/spark/conf.cloudera.spark_on_yarn/log4j.properties] for automatic log4j configuration.
log4j: Reading configuration from URL file:/etc/spark/conf.cloudera.spark_on_yarn/log4j.properties
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Yet my classpath does in fact contain a log4j.properties file&lt;/p&gt;
&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;http://stackoverflow.com/questions/31856532/spark-unable-to-load-custom-log4j-properties-from-fat-jar-resources&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;http://stackoverflow.com/questions/31856532/spark-unable-to-load-custom-log4j-properties-from-fat-jar-resources&lt;/a&gt; describes the same problem, and the answer there says&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;In 1.4.1, MutableURLClassLoader is not set before log4j initialization like it is in 1.3.1.&lt;/p&gt;
&lt;p&gt;Here is the explanation:&lt;/p&gt;
&lt;p&gt;While parsing arguments in in SparkSubmit.scala, it uses spark.util.Utils. This object has a new static dependency on log4j, through ShutdownHookManager, that triggers it&amp;rsquo;s initialization before the call to setContextClassLoader(MutableURLClassLoader) from submit &amp;gt; doRunMain &amp;gt; runMain&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;It also points to an issue, &lt;a class=&#34;link&#34; href=&#34;https://issues.apache.org/jira/browse/SPARK-9826&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://issues.apache.org/jira/browse/SPARK-9826&lt;/a&gt; and reading through it the gist is that the classpath priority gets preempted, so I tried putting my own jar first in the --jars list of the spark-submit script, which turned out to be equally useless&lt;/p&gt;
&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;http://apache-spark-user-list.1001560.n3.nabble.com/log4j-xml-bundled-in-jar-vs-log4-properties-in-spark-conf-tt23923.html&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;http://apache-spark-user-list.1001560.n3.nabble.com/log4j-xml-bundled-in-jar-vs-log4-properties-in-spark-conf-tt23923.html&lt;/a&gt; is another thread, which says adding it to SPARK_CLASSPATH in spark-env.sh works; that suggested configuring it through spark-submit should work too, and after adding --driver-class-path &amp;#34;xxx.jar&amp;#34; it did work on the driver node, but still not on the executor nodes&lt;/p&gt;
&lt;p&gt;Setting that aside for now: I also found that using a logger inside an operator function fails with a not-serializable error, the same old problem. See &lt;a class=&#34;link&#34; href=&#34;http://stackoverflow.com/questions/29208844/apache-spark-logging-within-scala&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;http://stackoverflow.com/questions/29208844/apache-spark-logging-within-scala&lt;/a&gt;&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-javascript&#34; data-lang=&#34;javascript&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nx&#34;&gt;object&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Holder&lt;/span&gt; &lt;span class=&#34;kr&#34;&gt;extends&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Serializable&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;      
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;   &lt;span class=&#34;err&#34;&gt;@&lt;/span&gt;&lt;span class=&#34;kr&#34;&gt;transient&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;lazy&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Logger&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;getLogger&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;getClass&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;getName&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;    
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nx&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;someRdd&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;spark&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;parallelize&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;List&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;1&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;mi&#34;&gt;2&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;mi&#34;&gt;3&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)).&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;foreach&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;element&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;   &lt;span class=&#34;nx&#34;&gt;Holder&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;info&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;element&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;This approach does work, but the source information is lost: every log line is attributed to this one object&lt;/p&gt;
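&lt;p&gt;A small sketch of one way to keep the source information while still using a per-JVM helper object (my own illustration; LogHolder and MyJob are hypothetical names, not from the post): look the logger up by a class passed from the call site, instead of by the name of the helper itself&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;import org.apache.log4j.Logger

object LogHolder extends Serializable {
  // Logger.getLogger caches loggers internally, so looking one up per call is cheap;
  // log lines are attributed to the class we pass in, not to LogHolder
  def log(clazz: Class[_]): Logger = Logger.getLogger(clazz.getName)
}

// usage inside an operator; classOf[MyJob] is a serializable Class object, so the closure still ships
someRdd.foreach { element =&amp;gt;
  LogHolder.log(classOf[MyJob]).info(element)
}
&lt;/code&gt;&lt;/pre&gt;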
&lt;p&gt;But why does this approach work, and what exactly is transient? &lt;a class=&#34;link&#34; href=&#34;http://www.cnblogs.com/lanxuezaipiao/p/3369962.html&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;http://www.cnblogs.com/lanxuezaipiao/p/3369962.html&lt;/a&gt; discusses it, but that seems to be about the Java keyword rather than the Scala modifier, so it does not fully answer the question; &lt;a class=&#34;link&#34; href=&#34;http://stackoverflow.com/questions/4772825/transient-lazy-val-field-serialization&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;http://stackoverflow.com/questions/4772825/transient-lazy-val-field-serialization&lt;/a&gt; raises the question but has no answer; the discussion at &lt;a class=&#34;link&#34; href=&#34;http://fdahms.com/2015/10/14/scala-and-the-transient-lazy-val-pattern/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;http://fdahms.com/2015/10/14/scala-and-the-transient-lazy-val-pattern/&lt;/a&gt; comes closest to the point&lt;/p&gt;
&lt;p&gt;First, a variable marked @transient lazy val is not serialized, and it neither needs to be nor will be initialized again; so where does its value come from? My reading is that it comes from an object that already exists. In other words, Spark itself has already opened an IO stream to the log file, and this pattern lets us piggyback on it. Another reading is that, although the field is not serialized, it does not piggyback on Spark either: in each JVM, the first time the class is used, it opens an IO stream of its own. I have not seen definitive material on the actual implementation&lt;/p&gt;
&lt;p&gt;Still, if this external helper object approach works, it is natural to wonder: can I just take an ordinary field of the class and mark it @transient lazy? Unfortunately, the answer is no&lt;/p&gt;
&lt;p&gt;So how does Spark itself do its logging? See &lt;a class=&#34;link&#34; href=&#34;https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/internal/Logging.scala&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/internal/Logging.scala&lt;/a&gt; and, to guard against the code changing between versions, I am copying it here&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-javascript&#34; data-lang=&#34;javascript&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;cm&#34;&gt;/*
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;cm&#34;&gt; * Licensed to the Apache Software Foundation (ASF) under one or more
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;cm&#34;&gt; * contributor license agreements.  See the NOTICE file distributed with
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;cm&#34;&gt; * this work for additional information regarding copyright ownership.
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;cm&#34;&gt; * The ASF licenses this file to You under the Apache License, Version 2.0
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;cm&#34;&gt; * (the &amp;#34;License&amp;#34;); you may not use this file except in compliance with
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;cm&#34;&gt; * the License.  You may obtain a copy of the License at
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;cm&#34;&gt; *
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;cm&#34;&gt; *    http://www.apache.org/licenses/LICENSE-2.0
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;cm&#34;&gt; *
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;cm&#34;&gt; * Unless required by applicable law or agreed to in writing, software
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;cm&#34;&gt; * distributed under the License is distributed on an &amp;#34;AS IS&amp;#34; BASIS,
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;cm&#34;&gt; * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;cm&#34;&gt; * See the License for the specific language governing permissions and
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;cm&#34;&gt; * limitations under the License.
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;cm&#34;&gt; */&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;kr&#34;&gt;package&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;org&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;apache&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;spark&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;internal&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;kr&#34;&gt;import&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;org&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;apache&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;log4j&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.{&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;Level&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;LogManager&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;PropertyConfigurator&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;kr&#34;&gt;import&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;org&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;slf4j&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.{&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;Logger&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;LoggerFactory&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;kr&#34;&gt;import&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;org&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;slf4j&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;impl&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;StaticLoggerBinder&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;kr&#34;&gt;import&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;org&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;apache&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;spark&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;util&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;Utils&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;cm&#34;&gt;/**
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;cm&#34;&gt; * Utility trait for classes that want to log data. Creates a SLF4J logger for the class and allows
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;cm&#34;&gt; * logging messages at different levels using methods that only evaluate parameters lazily if the
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;cm&#34;&gt; * log level is enabled.
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;cm&#34;&gt; */&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nx&#34;&gt;trait&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Logging&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;c1&#34;&gt;// Make the log field transient so that objects with Logging can
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;c1&#34;&gt;// be serialized and used on another machine
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;err&#34;&gt;@&lt;/span&gt;&lt;span class=&#34;kr&#34;&gt;transient&lt;/span&gt; &lt;span class=&#34;kr&#34;&gt;private&lt;/span&gt; &lt;span class=&#34;kd&#34;&gt;var&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log_&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Logger&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;null&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;c1&#34;&gt;// Method to get the logger name for this object
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;protected&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logName&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;c1&#34;&gt;// Ignore trailing $&amp;#39;s in the class names for Scala objects
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;this&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;getClass&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;getName&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;stripSuffix&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;$&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;c1&#34;&gt;// Method to get or create the logger for this object
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;protected&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Logger&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;log_&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;==&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;null&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;nx&#34;&gt;initializeLogIfNecessary&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;kc&#34;&gt;false&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;nx&#34;&gt;log_&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;LoggerFactory&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;getLogger&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;logName&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;nx&#34;&gt;log_&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;c1&#34;&gt;// Log methods that take only a String
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;protected&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logInfo&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;isInfoEnabled&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;info&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;protected&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logDebug&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;isDebugEnabled&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;debug&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;protected&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logTrace&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;isTraceEnabled&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;trace&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;protected&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logWarning&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;isWarnEnabled&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;warn&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;protected&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logError&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;isErrorEnabled&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;error&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;c1&#34;&gt;// Log methods that take Throwables (Exceptions/Errors) too
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;protected&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logInfo&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;throwable&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Throwable&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;isInfoEnabled&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;info&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;throwable&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;protected&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logDebug&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;throwable&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Throwable&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;isDebugEnabled&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;debug&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;throwable&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;protected&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logTrace&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;throwable&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Throwable&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;isTraceEnabled&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;trace&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;throwable&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;protected&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logWarning&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;throwable&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Throwable&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;isWarnEnabled&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;warn&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;throwable&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;protected&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logError&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;throwable&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Throwable&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;isErrorEnabled&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;error&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;throwable&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;protected&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;isTraceEnabled&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;()&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;Boolean&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;isTraceEnabled&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;protected&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;initializeLogIfNecessary&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;isInterpreter&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;Boolean&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Unit&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;!&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;Logging&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;initialized&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;nx&#34;&gt;Logging&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;initLock&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;kr&#34;&gt;synchronized&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;        &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;!&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;Logging&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;initialized&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;          &lt;span class=&#34;nx&#34;&gt;initializeLogging&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;isInterpreter&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;        &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;private&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;initializeLogging&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;isInterpreter&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;Boolean&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Unit&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;c1&#34;&gt;// Don&amp;#39;t use a logger in here, as this is itself occurring during initialization of a logger
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;c1&#34;&gt;// If Log4j 1.2 is being used, but is not initialized, load a default properties file
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;nx&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;binderClass&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;StaticLoggerBinder&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;getSingleton&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;getLoggerFactoryClassStr&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;c1&#34;&gt;// This distinguishes the log4j 1.2 binding, currently
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;c1&#34;&gt;// org.slf4j.impl.Log4jLoggerFactory, from the log4j 2.0 binding, currently
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;c1&#34;&gt;// org.apache.logging.slf4j.Log4jLoggerFactory
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;nx&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;usingLog4j12&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;org.slf4j.impl.Log4jLoggerFactory&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;equals&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;binderClass&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;usingLog4j12&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;nx&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log4j12Initialized&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;LogManager&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;getRootLogger&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;getAllAppenders&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;hasMoreElements&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;c1&#34;&gt;// scalastyle:off println
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;!&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;log4j12Initialized&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;        &lt;span class=&#34;nx&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;defaultLogProps&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;org/apache/spark/log4j-defaults.properties&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;        &lt;span class=&#34;nx&#34;&gt;Option&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;Utils&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;getSparkClassLoader&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;getResource&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;defaultLogProps&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;))&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;match&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;          &lt;span class=&#34;k&#34;&gt;case&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Some&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;url&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;            &lt;span class=&#34;nx&#34;&gt;PropertyConfigurator&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;configure&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;url&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;            &lt;span class=&#34;nx&#34;&gt;System&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;err&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;println&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;s&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;Using Spark&amp;#39;s default log4j profile: $defaultLogProps&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;          &lt;span class=&#34;k&#34;&gt;case&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;None&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;            &lt;span class=&#34;nx&#34;&gt;System&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;err&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;println&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;s&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;Spark was unable to load $defaultLogProps&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;        &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;isInterpreter&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;        &lt;span class=&#34;c1&#34;&gt;// Use the repl&amp;#39;s main class to define the default log level when running the shell,
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;        &lt;span class=&#34;c1&#34;&gt;// overriding the root logger&amp;#39;s config if they&amp;#39;re different.
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;        &lt;span class=&#34;nx&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;rootLogger&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;LogManager&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;getRootLogger&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;        &lt;span class=&#34;nx&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;replLogger&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;LogManager&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;getLogger&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;logName&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;        &lt;span class=&#34;nx&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;replLevel&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Option&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;replLogger&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;getLevel&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;()).&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;getOrElse&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;Level&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;WARN&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;        &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;replLevel&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;!=&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;rootLogger&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;getEffectiveLevel&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;())&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;          &lt;span class=&#34;nx&#34;&gt;System&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;err&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;printf&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;Setting default log level to \&amp;#34;%s\&amp;#34;.\n&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;replLevel&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;          &lt;span class=&#34;nx&#34;&gt;System&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;err&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;println&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;To adjust logging level use sc.setLogLevel(newLevel). &amp;#34;&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;+&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;            &lt;span class=&#34;s2&#34;&gt;&amp;#34;For SparkR, use setLogLevel(newLevel).&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;          &lt;span class=&#34;nx&#34;&gt;rootLogger&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;setLevel&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;replLevel&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;        &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;c1&#34;&gt;// scalastyle:on println
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;nx&#34;&gt;Logging&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;initialized&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;c1&#34;&gt;// Force a call into slf4j to initialize it. Avoids this happening from multiple threads
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;c1&#34;&gt;// and triggering this: http://mailman.qos.ch/pipermail/slf4j-dev/2010-April/002956.html
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;kr&#34;&gt;private&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;object&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Logging&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;err&#34;&gt;@&lt;/span&gt;&lt;span class=&#34;kr&#34;&gt;volatile&lt;/span&gt; &lt;span class=&#34;kr&#34;&gt;private&lt;/span&gt; &lt;span class=&#34;kd&#34;&gt;var&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;initialized&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;false&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;nx&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;initLock&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;new&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;Object&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;k&#34;&gt;try&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;c1&#34;&gt;// We use reflection here to handle the case where users remove the
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;c1&#34;&gt;// slf4j-to-jul bridge order to route their logs to JUL.
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;nx&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;bridgeClass&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Utils&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;classForName&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;org.slf4j.bridge.SLF4JBridgeHandler&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;nx&#34;&gt;bridgeClass&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;getMethod&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;removeHandlersForRootLogger&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;).&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;invoke&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;kc&#34;&gt;null&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;nx&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;installed&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;bridgeClass&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;getMethod&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;isInstalled&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;).&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;invoke&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;kc&#34;&gt;null&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;).&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;asInstanceOf&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;[&lt;/span&gt;&lt;span class=&#34;nb&#34;&gt;Boolean&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;!&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;installed&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;nx&#34;&gt;bridgeClass&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;getMethod&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;install&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;).&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;invoke&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;kc&#34;&gt;null&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;catch&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;case&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;e&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;ClassNotFoundException&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;c1&#34;&gt;// can&amp;#39;t log anything yet so just fail silently
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;So Spark&amp;#39;s Logging is implemented as a trait. For how it is used, see &lt;a class=&#34;link&#34; href=&#34;https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/SparkContext.scala&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/SparkContext.scala&lt;/a&gt;: the declaration class SparkContext(config: SparkConf) extends Logging shows that the trait is simply mixed in with extends.&lt;/p&gt;
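&lt;p&gt;For context, here is a minimal sketch of how such a mixed-in logging trait is used: the class just extends it and calls the protected helpers, the same way SparkContext extends Logging. This is my own illustration, not code from Spark or from this post; the stripped-down trait and the WordCounter class are made up, and the real traits discussed below add level checks, a companion object, and serialization handling.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import org.slf4j.LoggerFactory

// Minimal stand-in trait, for illustration only.
trait Logging {
  protected def log = LoggerFactory.getLogger(getClass.getName)
  protected def logInfo(msg: =&amp;gt; String): Unit = if (log.isInfoEnabled) log.info(msg)
}

// Hypothetical user class: mixed in via extends, like SparkContext does.
class WordCounter extends Logging {
  def count(lines: Seq[String]): Map[String, Int] = {
    logInfo(s&amp;#34;counting words in ${lines.size} lines&amp;#34;)
    lines.flatMap(_.split(&amp;#34;\\s+&amp;#34;)).groupBy(identity).map { case (w, ws) =&amp;gt; w -&amp;gt; ws.size }
  }
}
&lt;/code&gt;&lt;/pre&gt;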
&lt;p&gt;We copied the pattern and wrote a trait of our own, but without the corresponding companion object, and the job still failed with a not-serializable error. Then I found an example at &lt;a class=&#34;link&#34; href=&#34;http://stackoverflow.com/questions/978252/logging-in-scala&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;http://stackoverflow.com/questions/978252/logging-in-scala&lt;/a&gt;:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-javascript&#34; data-lang=&#34;javascript&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nx&#34;&gt;trait&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Loggable&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;nx&#34;&gt;val&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logger&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;Logger&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Logging&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;getLogger&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;this&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;checkFormat&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;refs&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;Seq&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;[&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;Any&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;])&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;        &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;refs&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;size&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&#34;mi&#34;&gt;0&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;msgfmtSeq&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;refs&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;else&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;trace&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;refs&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;Any&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;*&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logger&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;trace&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;checkFormat&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;refs&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;trace&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;t&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;Throwable&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;refs&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;Any&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;*&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logger&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;trace&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;checkFormat&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;refs&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;),&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;t&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;info&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;refs&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;Any&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;*&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logger&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;info&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;checkFormat&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;refs&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;info&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;t&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;Throwable&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;refs&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;Any&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;*&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logger&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;info&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;checkFormat&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;refs&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;),&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;t&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;warn&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;refs&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;Any&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;*&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logger&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;warn&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;checkFormat&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;refs&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;warn&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;t&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;Throwable&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;refs&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;Any&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;*&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logger&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;warn&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;checkFormat&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;refs&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;),&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;t&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;critical&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;refs&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;Any&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;*&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logger&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;error&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;checkFormat&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;refs&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;critical&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;t&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;Throwable&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;refs&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;Any&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;*&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logger&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;error&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;checkFormat&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;refs&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;),&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;t&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;cm&#34;&gt;/**
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;cm&#34;&gt; * Note: implementation taken from scalax.logging API
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;cm&#34;&gt; */&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nx&#34;&gt;object&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Logging&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;  
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;loggerNameForClass&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;className&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;  
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;        &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;className&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;endsWith&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;$&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;className&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;substring&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;0&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;className&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;length&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;-&lt;/span&gt; &lt;span class=&#34;mi&#34;&gt;1&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;  
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;        &lt;span class=&#34;k&#34;&gt;else&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;className&lt;/span&gt;  
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;  
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;getLogger&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;logging&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;AnyRef&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;LoggerFactory&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;getLogger&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;loggerNameForClass&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;logging&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;getClass&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;getName&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;))&lt;/span&gt;  
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;My first attempt looked like this:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-javascript&#34; data-lang=&#34;javascript&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nx&#34;&gt;trait&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Loggable&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;err&#34;&gt;@&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;Transient&lt;/span&gt; &lt;span class=&#34;kr&#34;&gt;private&lt;/span&gt; &lt;span class=&#34;kd&#34;&gt;var&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log_&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Logger&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;null&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;protected&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Logger&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;log_&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;==&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;null&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;      &lt;span class=&#34;nx&#34;&gt;log_&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;LoggerFactory&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;getLogger&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;this&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;getClass&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;getName&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;nx&#34;&gt;log_&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;c1&#34;&gt;// Log methods that take only a String
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;protected&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logInfo&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;isInfoEnabled&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;info&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;protected&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logDebug&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;isDebugEnabled&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;debug&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;protected&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logTrace&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;isTraceEnabled&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;trace&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;protected&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logWarning&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;isWarnEnabled&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;warn&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;protected&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logError&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;isErrorEnabled&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;error&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;c1&#34;&gt;// Log methods that take Throwables (Exceptions/Errors) too
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;protected&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logInfo&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;throwable&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Throwable&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;isInfoEnabled&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;info&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;throwable&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;protected&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logDebug&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;throwable&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Throwable&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;isDebugEnabled&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;debug&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;throwable&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;protected&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logTrace&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;throwable&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Throwable&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;isTraceEnabled&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;trace&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;throwable&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;protected&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logWarning&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;throwable&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Throwable&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;isWarnEnabled&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;warn&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;throwable&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;protected&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logError&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;throwable&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Throwable&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;isErrorEnabled&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;error&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;throwable&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;It still complained about not being serializable. I assumed that was because I had not used an object, so I changed it to this:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-javascript&#34; data-lang=&#34;javascript&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nx&#34;&gt;trait&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Loggable&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;private&lt;/span&gt; &lt;span class=&#34;kd&#34;&gt;var&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Logger&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Logging&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;getLogger&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;this&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;c1&#34;&gt;// Log methods that take only a String
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;protected&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logInfo&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;isInfoEnabled&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;info&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;protected&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logDebug&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;isDebugEnabled&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;debug&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;protected&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logTrace&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;isTraceEnabled&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;trace&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;protected&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logWarning&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;isWarnEnabled&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;warn&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;protected&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logError&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;isErrorEnabled&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;error&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;c1&#34;&gt;// Log methods that take Throwables (Exceptions/Errors) too
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;protected&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logInfo&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;throwable&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Throwable&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;isInfoEnabled&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;info&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;throwable&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;protected&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logDebug&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;throwable&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Throwable&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;isDebugEnabled&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;debug&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;throwable&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;protected&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logTrace&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;throwable&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Throwable&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;isTraceEnabled&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;trace&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;throwable&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;protected&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logWarning&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;throwable&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Throwable&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;isWarnEnabled&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;warn&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;throwable&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;kr&#34;&gt;protected&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;logError&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;String&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;throwable&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Throwable&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;isErrorEnabled&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;log&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;error&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;msg&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;throwable&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nx&#34;&gt;object&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;Logging&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;nx&#34;&gt;def&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;getLogger&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;logging&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;nx&#34;&gt;AnyRef&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;nx&#34;&gt;LoggerFactory&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;getLogger&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;logging&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;getClass&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;nx&#34;&gt;getName&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Still no luck, so I simply copied that implementation verbatim, and sure enough it fell over as well.&lt;/p&gt;
</description>
        </item>
        <item>
        <title>Setting up a spark sql context in zeppelin</title>
        <link>https://blog.zrj.me/posts/2016-12-12-zeppelin-%E6%90%AD%E5%BB%BA-spark-sql-context/</link>
        <pubDate>Mon, 12 Dec 2016 23:37:48 +0800</pubDate>
        
        <guid>https://blog.zrj.me/posts/2016-12-12-zeppelin-%E6%90%AD%E5%BB%BA-spark-sql-context/</guid>
        <description>&lt;p&gt;There are two ways to run spark sql on zeppelin. One is to have spark start a thrift server and expose a jdbc service; zeppelin then connects to the spark thrift server over jdbc, submits the sql and waits for the result. This sounds great, since it decouples the front end from the back end, but in practice the spark thrift server turned out not to be mature enough: if it holds a spark context on yarn for a long time, it can hang.&lt;/p&gt;
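&lt;p&gt;(Just for orientation, a minimal sketch of what the jdbc route would look like from a client; the host, port, user and query below are placeholders rather than our actual setup. The spark thrift server speaks the HiveServer2 protocol, so the plain Hive jdbc driver is enough.)&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;// hypothetical smoke test against a Spark Thrift Server, e.g. pasted into a scala REPL
import java.sql.DriverManager

Class.forName(&amp;#34;org.apache.hive.jdbc.HiveDriver&amp;#34;)
val conn = DriverManager.getConnection(&amp;#34;jdbc:hive2://thrift-host:10000/default&amp;#34;, &amp;#34;zeppelin&amp;#34;, &amp;#34;&amp;#34;)
try {
  val rs = conn.createStatement().executeQuery(&amp;#34;SELECT 1&amp;#34;)
  while (rs.next()) println(rs.getInt(1))
} finally {
  conn.close()
}
&lt;/code&gt;&lt;/pre&gt;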
&lt;p&gt;The other way (actually zeppelin&amp;rsquo;s default) is to let zeppelin start its own spark context, register it on yarn, and manage the memory, CPU and so on of that yarn-client application by itself. Below is my attempt to set things up this way.&lt;/p&gt;
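&lt;p&gt;(A trivial sanity check I would run in a notebook paragraph once the interpreter is up, to confirm the context really registered on yarn in client mode; a generic snippet, nothing specific to this cluster.)&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;// confirm the interpreter&amp;#39;s SparkContext is running in yarn-client mode
println(sc.master)          // expected: yarn-client
println(sc.applicationId)   // the YARN application id the context registered under
println(sc.version)
&lt;/code&gt;&lt;/pre&gt;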
&lt;p&gt;While setting up zeppelin I found that, when testing the spark context from the page with something like sc.version, zeppelin&amp;rsquo;s backend log reported an error:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;ERROR [2016-12-12 17:07:44,853] ({pool-2-thread-2} Job.java[run]:189) - Job failed
java.lang.NoSuchMethodError: scala.reflect.api.JavaUniverse.runtimeMirror(Ljava/lang/ClassLoader;)Lscala/reflect/api/JavaMirrors$JavaMirror;
	at org.apache.spark.repl.SparkILoop.&amp;lt;init&amp;gt;(SparkILoop.scala:936)
	at org.apache.spark.repl.SparkILoop.&amp;lt;init&amp;gt;(SparkILoop.scala:70)
	at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:765)
	at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:69)
	at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:93)
	at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:341)
	at org.apache.zeppelin.scheduler.Job.run(Job.java:176)
	at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Judging from the log, my guess was that the scala environment itself was broken, so the plan was to set zeppelin aside first and check, from the command line, whether the spark shells handle java, scala, python and sql properly.&lt;/p&gt;
&lt;p&gt;However, on the command line spark-shell worked just fine:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;scala&amp;gt; sc.version
res0: String = 1.6.0

scala&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;So at the very least the scala runtime in our environment is fine.&lt;/p&gt;
&lt;p&gt;Next I tried deleting zeppelin and re-extracting it from the tarball, and discovered that the tarball had actually been packed on another machine; even the existing notebooks had been copied along with it.&lt;/p&gt;
&lt;p&gt;According to the official site, the 0.6.2 tarball can be started right after extraction, without renaming the template files under conf. But if you want spark, you have to set SPARK_HOME, and that is where the trouble starts: start the service, test with sc.version in the editor, and the page shows an ERROR while the backend log stays silent, which is maddening. The setup steps include configuring the spark master to yarn-client in the Interpreter menu; since the page already showed yarn-client I left it untouched, and no log ever appeared. Later, out of boredom, I changed that setting anyway, hit save on the page, restarted the service, and error logs finally showed up.&lt;/p&gt;
&lt;p&gt;It was still the same error:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;ERROR [2016-12-12 18:22:07,626] ({pool-2-thread-4} Job.java[run]:189) - Job failed
java.lang.NoSuchMethodError: scala.reflect.api.JavaUniverse.runtimeMirror(Ljava/lang/ClassLoader;)Lscala/reflect/api/JavaMirrors$JavaMirror;
	at org.apache.spark.repl.SparkILoop.&amp;lt;init&amp;gt;(SparkILoop.scala:936)
	at org.apache.spark.repl.SparkILoop.&amp;lt;init&amp;gt;(SparkILoop.scala:70)
	at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:765)
	at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:69)
	at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:93)
	at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:341)
	at org.apache.zeppelin.scheduler.Job.run(Job.java:176)
	at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;So I tried running pyspark from the command line, hoping to sidestep the scala problem.&lt;/p&gt;
&lt;p&gt;pyspark also ran fine on the command line:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;&amp;gt;&amp;gt;&amp;gt; sc.version
u&amp;#39;1.6.0&amp;#39;
&amp;gt;&amp;gt;&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Then I tried %spark.pyspark in zeppelin and got the same error. So the situation now is: spark-shell and pyspark both work on the command line, but not inside zeppelin, which fails on scala.reflect.api.JavaUniverse.runtimeMirror&lt;/p&gt;
&lt;p&gt;Searching again, I found this: http://www.itdadao.com/articles/c15a745566p0.html&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Solution&lt;/p&gt;
&lt;p&gt;Copy all the jars under SPARK_HOME/lib into zeppelin&amp;rsquo;s lib directory.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;That produced a different error:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;org.apache.thrift.transport.TTransportException
	at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
	at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
	at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
	at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
	at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
	at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
	at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.recv_interpret(RemoteInterpreterService.java:249)
	at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.interpret(RemoteInterpreterService.java:233)
	at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.interpret(RemoteInterpreter.java:269)
	at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:94)
	at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:279)
	at org.apache.zeppelin.scheduler.Job.run(Job.java:176)
	at org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(RemoteScheduler.java:328)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;It turned out I had two zeppelin instances running, one on 8080 and one on 8081. What was I thinking&amp;hellip;&lt;/p&gt;
&lt;p&gt;Still, sc.version now worked, but %spark.sql did not, and neither did sqlContext.sql(); both failed with:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;java.lang.NoSuchMethodError: scala.runtime.ObjectRef.zero()Lscala/runtime/ObjectRef;
	at scala.util.parsing.combinator.Parsers$Parser.$tilde$greater(Parsers.scala)
	at org.apache.spark.sql.execution.SparkSQLParser.cache$lzycompute(SparkSQLParser.scala:75)
	at org.apache.spark.sql.execution.SparkSQLParser.cache(SparkSQLParser.scala:74)
	at org.apache.spark.sql.execution.SparkSQLParser.start$lzycompute(SparkSQLParser.scala:72)
	at org.apache.spark.sql.execution.SparkSQLParser.start(SparkSQLParser.scala:71)
	at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.parse(AbstractSparkSQLParser.scala:34)
	at org.apache.spark.sql.SQLContext$$anonfun$1.apply(SQLContext.scala:208)
	at org.apache.spark.sql.SQLContext$$anonfun$1.apply(SQLContext.scala:208)
	at org.apache.spark.sql.execution.datasources.DDLParser.parse(DDLParser.scala:43)
	at org.apache.spark.sql.SQLContext.parseSql(SQLContext.scala:231)
	at org.apache.spark.sql.hive.HiveContext.parseSql(HiveContext.scala:331)
	at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:817)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.zeppelin.spark.SparkSqlInterpreter.interpret(SparkSqlInterpreter.java:115)
	at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:94)
	at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:341)
	at org.apache.zeppelin.scheduler.Job.run(Job.java:176)
	at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;A quick search turned up an explanation here: http://stackoverflow.com/questions/28140173/why-does-submitting-a-job-fail-with-nosuchmethoderror-scala-runtime-volatileob&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;zero() on scala.runtime.VolatileObjectRef has been introduced in Scala 2.11 You probably have a library compiled against Scala 2.11 and running on a Scala 2.10 runtime.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;But it does not say what to do about it&amp;hellip; and as far as I can tell we never configured the Scala runtime in any special way when installing the system or zeppelin.&lt;/p&gt;
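&lt;p&gt;(A tiny diagnostic worth noting here: paste the lines below into a zeppelin paragraph and into spark-shell and compare the output. It only uses the standard library; nothing in it is specific to our cluster.)&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;// print the version of the Scala library this process actually loaded;
// a 2.10.x value here, while the jars were built against 2.11, would explain
// a NoSuchMethodError like the one above
println(scala.util.Properties.versionString)
println(scala.util.Properties.versionNumberString)
&lt;/code&gt;&lt;/pre&gt;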
&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;http://stackoverflow.com/questions/27728731/scala-code-throw-exception-in-spark&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;http://stackoverflow.com/questions/27728731/scala-code-throw-exception-in-spark&lt;/a&gt;, which says to rebuild, but that felt like too big a move, so I copied a spark assembly build over from another cluster instead, and it still did not work&lt;/p&gt;
&lt;p&gt;Then I found https://issues.apache.org/jira/browse/ZEPPELIN-605, where people are discussing scala 2.11 support for zeppelin. The thread is long; I skimmed it and did not spot anything particularly helpful.&lt;/p&gt;
&lt;p&gt;So the current guess is: at build time some jars were compiled against the scala 2.11 libraries and put on the classpath, but at runtime the classpath also contains other jars with identically named packages, so resolution picks up the old version.&lt;/p&gt;
&lt;p&gt;That raises the question of resolution order when the classpath contains packages with the same name (a quick way to check which copy actually wins is sketched after the quote below). See http://stackoverflow.com/questions/6935705/two-classes-with-same-name-in-classpath, which says&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Quoting Oracle:&lt;/p&gt;
&lt;p&gt;Specification order&lt;/p&gt;
&lt;p&gt;The order in which you specify multiple class path entries is important. The Java interpreter will look for classes in the directories in the order they appear in the class path variable. In the example above, the Java interpreter will first look for a needed class in the directory C:\java\MyClasses. Only if it doesn&amp;rsquo;t find a class with the proper name in that directory will the interpreter look in the C:\java\OtherClasses directory. The example mentioned:&lt;/p&gt;
&lt;p&gt;C:&amp;gt; java -classpath C:\java\MyClasses;C:\java\OtherClasses &amp;hellip; So yes, it will load the one appears in the classpath that specified first.&lt;/p&gt;
&lt;/blockquote&gt;
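&lt;p&gt;(Following that logic, here is a sketch of how to check which copy actually wins: getResources lists every copy of a resource visible to the class loader in search order, so the first URL printed is the one that gets used. The class name is only an example pick.)&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;import scala.collection.JavaConverters._

// every scala/runtime/ObjectRef.class on the classpath, in lookup order
val copies = getClass.getClassLoader
  .getResources(&amp;#34;scala/runtime/ObjectRef.class&amp;#34;)
  .asScala
  .toList
copies.foreach(println)
&lt;/code&gt;&lt;/pre&gt;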
&lt;p&gt;As for printing the current classpath, this gist shows how: https://gist.github.com/jessitron/8376139&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-java&#34; data-lang=&#34;java&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;n&#34;&gt;def&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;nf&#34;&gt;urlses&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;cl&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;ClassLoader&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;):&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;Array&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;[&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;java&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;na&#34;&gt;net&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;na&#34;&gt;URL&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;]&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;cl&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;match&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;  &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;case&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;kc&#34;&gt;null&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&amp;gt;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;Array&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;()&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;  &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;case&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;u&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;java&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;na&#34;&gt;net&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;na&#34;&gt;URLClassLoader&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&amp;gt;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;u&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;na&#34;&gt;getURLs&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;()&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;o&#34;&gt;++&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;urlses&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;cl&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;na&#34;&gt;getParent&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;  &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;case&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;_&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&amp;gt;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;urlses&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;cl&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;na&#34;&gt;getParent&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;n&#34;&gt;val&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;  &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;urls&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;urlses&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;getClass&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;na&#34;&gt;getClassLoader&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;n&#34;&gt;println&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;urls&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;na&#34;&gt;filterNot&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;_&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;na&#34;&gt;toString&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;na&#34;&gt;contains&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;&amp;#34;ivy&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)).&lt;/span&gt;&lt;span class=&#34;na&#34;&gt;mkString&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;&amp;#34;\n&amp;#34;&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;But written this way the line gets too long, and the zeppelin page complains about an Incomplete expression.&lt;/p&gt;
&lt;p&gt;So it has to be written as urls.foreach{ println } instead.&lt;/p&gt;
&lt;p&gt;The printed classpath looks like this:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;file:/usr/java/jdk1.7.0_67-cloudera/jre/lib/resources.jar
file:/usr/java/jdk1.7.0_67-cloudera/jre/lib/rt.jar
file:/usr/java/jdk1.7.0_67-cloudera/jre/lib/jsse.jar
file:/usr/java/jdk1.7.0_67-cloudera/jre/lib/jce.jar
file:/usr/java/jdk1.7.0_67-cloudera/jre/lib/charsets.jar
file:/usr/java/jdk1.7.0_67-cloudera/jre/lib/jfr.jar
file:/usr/java/jdk1.7.0_67-cloudera/jre/lib/ext/localedata.jar
file:/usr/java/jdk1.7.0_67-cloudera/jre/lib/ext/sunpkcs11.jar
file:/usr/java/jdk1.7.0_67-cloudera/jre/lib/ext/sunjce_provider.jar
file:/usr/java/jdk1.7.0_67-cloudera/jre/lib/ext/zipfs.jar
file:/usr/java/jdk1.7.0_67-cloudera/jre/lib/ext/dnsns.jar
file:/usr/java/jdk1.7.0_67-cloudera/jre/lib/ext/sunec.jar
file:/data/zeppelin-0.6.2-bin-all/bin/./
file:/usr/java/jdk1.7.0_67-cloudera/lib/
file:/fwdata/zeppelin-0.6.2-bin-all/lib/javax.ws.rs-api-2.0-m10.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/apacheds-i18n-2.0.0-M15.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/maven-scm-provider-svnexe-1.4.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/scala-parser-combinators_2.11-1.0.4.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/jsp-api-2.1.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/jasper-runtime-5.5.23.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/JavaEWAH-0.7.9.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/commons-collections-3.2.1.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/woodstox-core-asl-4.2.0.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/jetty-util-9.2.15.v20160210.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/curator-recipes-2.6.0.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/jetty-xml-9.2.15.v20160210.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/asm-3.1.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/dom4j-1.6.1.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/c3p0-0.9.1.1.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/apacheds-kerberos-codec-2.0.0-M15.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/org.eclipse.jdt.annotation-1.1.0.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/lucene-core-5.3.1.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/org.eclipse.jgit-4.1.1.201511131810-r.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/jetty-client-9.2.15.v20160210.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/spark-assembly.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/hadoop-auth-2.6.0.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/javax.servlet-api-3.1.0.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/cxf-rt-core-2.7.7.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/reflections-0.9.8.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/jackrabbit-jcr-commons-1.5.2.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/scala-reflect-2.11.7.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/lucene-queries-5.3.1.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/jetty-security-9.2.15.v20160210.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/jsr305-1.3.9.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/cxf-rt-transports-http-jetty-2.7.7.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/shiro-core-1.2.3.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/scala-compiler-2.11.7.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/lucene-highlighter-5.3.1.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/quartz-2.2.1.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/scala-xml_2.11-1.0.2.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/cxf-rt-bindings-xml-2.7.7.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/jersey-server-1.13.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/websocket-server-9.2.15.v20160210.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/jetty-util-6.1.26.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/websocket-client-9.2.15.v20160210.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/jetty-servlet-9.2.15.v20160210.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/guava-15.0.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/jetty-io-9.2.15.v20160210.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/lucene-join-5.3.1.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/slf4j-api-1.7.10.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/log4j-1.2.17.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/commons-net-3.1.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/scala-library-2.11.7.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/websocket-servlet-9.2.15.v20160210.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/cxf-rt-transports-http-2.7.7.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/java-xmlbuilder-0.4.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/commons-codec-1.5.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/gson-2.2.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/jetty-server-9.2.15.v20160210.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/commons-el-1.0.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/curator-client-2.6.0.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/jets3t-0.9.0.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/commons-httpclient-3.0.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/geronimo-javamail_1.4_spec-1.7.1.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/hadoop-common-2.6.0.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/commons-cli-1.2.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/zeppelin-zengine-0.6.2.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/aws-java-sdk-s3-1.10.62.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/lucene-memory-5.3.1.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/azure-storage-4.0.0.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/jackrabbit-webdav-1.5.2.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/api-asn1-api-1.0.0-M20.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/lucene-queryparser-5.3.1.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/hive-jdbc-1.1.0-cdh5.7.2.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/jetty-6.1.26.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/jackson-databind-2.5.3.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/stax2-api-3.1.1.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/xmlschema-core-2.0.3.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/httpcore-4.3.3.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/slf4j-log4j12-1.7.10.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/xmlenc-0.52.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/cxf-rt-frontend-jaxrs-2.7.7.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/javassist-3.12.1.GA.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/maven-scm-api-1.4.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/wsdl4j-1.6.3.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/jackson-mapper-asl-1.9.13.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/commons-lang-2.5.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/joda-time-2.8.1.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/lucene-sandbox-5.3.1.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/commons-io-2.4.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/jersey-servlet-1.13.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/commons-math3-3.1.1.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/jaxb-impl-2.2.6.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/jackson-annotations-2.5.0.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/spark-examples-1.6.0-cdh5.7.2-hadoop2.6.0-cdh5.7.2.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/htrace-core-3.0.4.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/xz-1.0.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/websocket-api-9.2.15.v20160210.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/hive-jdbc-1.1.0-cdh5.7.2-standalone.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/aws-java-sdk-core-1.10.62.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/lucene-analyzers-common-5.3.1.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/jetty-webapp-9.2.15.v20160210.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/zeppelin-interpreter-0.6.2.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/jersey-core-1.13.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/websocket-common-9.2.15.v20160210.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/cxf-api-2.7.7.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/jasper-compiler-5.5.23.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/plexus-utils-1.5.6.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/hadoop-annotations-2.6.0.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/commons-compress-1.4.1.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/jackson-core-2.5.3.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/jackson-core-asl-1.9.13.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/api-util-1.0.0-M20.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/spark-assembly-1.6.0-cdh5.7.2-hadoop2.6.0-cdh5.7.2.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/curator-framework-2.6.0.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/shiro-web-1.2.3.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/regexp-1.3.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/httpclient-4.3.6.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/geronimo-servlet_3.0_spec-1.0.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/protobuf-java-2.5.0.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/commons-vfs2-2.0.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/xml-apis-1.4.01.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/maven-scm-provider-svn-commons-1.4.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/commons-beanutils-1.8.3.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/commons-logging-1.1.1.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/commons-configuration-1.9.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/jetty-http-9.2.15.v20160210.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/aws-java-sdk-kms-1.10.62.jar
file:/fwdata/zeppelin-0.6.2-bin-all/lib/jsch-0.1.53.jar
file:/fwdata/zeppelin-0.6.2-bin-all/zeppelin-server-0.6.2.jar
file:/fwdata/zeppelin-0.6.2-bin-all/conf/
file:/fwdata/zeppelin-0.6.2-bin-all/interpreter/spark/zeppelin-spark_2.11-0.6.2.jar
file:/opt/cloudera/parcels/CDH/lib/spark/conf/
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/lib/spark/lib/spark-assembly-1.6.0-cdh5.7.2-hadoop2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH/lib/spark/conf/yarn-conf/
file:/etc/hive/conf/
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/ST4-4.0.4.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/accumulo-core-1.6.0.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/accumulo-fate-1.6.0.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/accumulo-start-1.6.0.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/accumulo-trace-1.6.0.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/activation-1.1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/ant-1.9.1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/ant-launcher-1.9.1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/antlr-2.7.7.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/antlr-runtime-3.4.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/aopalliance-1.0.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/apache-log4j-extras-1.1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/apache-log4j-extras-1.2.17.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/apacheds-i18n-2.0.0-M15.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/apacheds-kerberos-codec-2.0.0-M15.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/api-asn1-api-1.0.0-M20.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/api-util-1.0.0-M20.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/asm-3.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/asm-commons-3.1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/asm-tree-3.1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/async-1.4.0.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/asynchbase-1.5.0.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/avro-1.7.6-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/avro-compiler-1.7.6-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/avro-ipc-1.7.6-cdh5.7.2-tests.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/avro-ipc-1.7.6-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/avro-mapred-1.7.6-cdh5.7.2-hadoop2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/avro-maven-plugin-1.7.6-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/avro-protobuf-1.7.6-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/avro-service-archetype-1.7.6-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/avro-thrift-1.7.6-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/aws-java-sdk-core-1.10.6.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/aws-java-sdk-kms-1.10.6.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/aws-java-sdk-s3-1.10.6.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/bonecp-0.8.0.RELEASE.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/calcite-avatica-1.0.0-incubating.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/calcite-core-1.0.0-incubating.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/calcite-linq4j-1.0.0-incubating.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-beanutils-1.7.0.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-beanutils-core-1.8.0.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-cli-1.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-codec-1.4.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-codec-1.8.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-collections-3.2.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-compiler-2.7.6.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-compress-1.4.1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-configuration-1.6.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-daemon-1.0.13.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-dbcp-1.4.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-digester-1.8.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-el-1.0.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-httpclient-3.0.1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-httpclient-3.1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-io-2.4.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-jexl-2.1.1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-lang-2.6.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-logging-1.1.3.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-math-2.1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-math3-3.1.1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-net-3.1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-pool-1.5.4.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-vfs2-2.0.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/curator-client-2.6.0.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/curator-client-2.7.1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/curator-framework-2.6.0.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/curator-framework-2.7.1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/curator-recipes-2.6.0.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/curator-recipes-2.7.1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/datanucleus-api-jdo-3.2.6.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/datanucleus-core-3.2.10.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/datanucleus-rdbms-3.2.9.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/derby-10.11.1.1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/eigenbase-properties-1.1.4.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/fastutil-6.3.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/findbugs-annotations-1.3.9-1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-avro-source-1.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-dataset-sink-1.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-file-channel-1.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-hdfs-sink-1.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-hive-sink-1.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-irc-sink-1.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-jdbc-channel-1.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-jms-source-1.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-kafka-channel-1.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-kafka-source-1.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-ng-auth-1.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-ng-configuration-1.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-ng-core-1.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-ng-elasticsearch-sink-1.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-ng-embedded-agent-1.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-ng-hbase-sink-1.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-ng-kafka-sink-1.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-ng-log4jappender-1.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-ng-morphline-solr-sink-1.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-ng-node-1.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-ng-sdk-1.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-scribe-source-1.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-spillable-memory-channel-1.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-taildir-source-1.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-thrift-source-1.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-tools-1.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-twitter-source-1.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/geronimo-annotation_1.0_spec-1.1.1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/geronimo-jaspic_1.0_spec-1.0.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/geronimo-jta_1.1_spec-1.1.1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/groovy-all-2.4.4.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/gson-2.2.4.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/guava-11.0.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/guava-11.0.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/guava-14.0.1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/guice-3.0.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/guice-servlet-3.0.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-annotations-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-ant-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-archive-logs-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-archives-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-auth-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-aws-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-azure-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-common-2.6.0-cdh5.7.2-tests.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-common-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-datajoin-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-distcp-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-extras-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-gridmix-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-hdfs-2.6.0-cdh5.7.2-tests.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-hdfs-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-hdfs-nfs-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-mapreduce-client-app-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-mapreduce-client-common-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-mapreduce-client-core-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-mapreduce-client-hs-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-mapreduce-client-hs-plugins-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-mapreduce-client-jobclient-2.6.0-cdh5.7.2-tests.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-mapreduce-client-jobclient-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-mapreduce-client-nativetask-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-mapreduce-client-shuffle-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-mapreduce-examples-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-nfs-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-openstack-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-rumen-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-sls-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-streaming-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-yarn-api-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-yarn-applications-distributedshell-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-yarn-applications-unmanaged-am-launcher-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-yarn-client-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-yarn-common-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-yarn-registry-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-yarn-server-applicationhistoryservice-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-yarn-server-common-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-yarn-server-nodemanager-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-yarn-server-resourcemanager-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-yarn-server-tests-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-yarn-server-web-proxy-2.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hamcrest-core-1.1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hamcrest-core-1.3.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hbase-annotations-1.2.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hbase-client-1.2.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hbase-common-1.2.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hbase-hadoop-compat-1.2.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hbase-hadoop2-compat-1.2.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hbase-protocol-1.2.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hbase-server-1.2.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/high-scale-lib-1.1.1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-accumulo-handler-1.1.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-ant-1.1.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-beeline-1.1.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-cli-1.1.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-common-1.1.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-contrib-1.1.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-exec-1.1.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-hbase-handler-1.1.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-hwi-1.1.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-jdbc-1.1.0-cdh5.7.2-standalone.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-jdbc-1.1.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-metastore-1.1.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-serde-1.1.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-service-1.1.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-shims-0.23-1.1.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-shims-1.1.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-shims-common-1.1.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-shims-scheduler-1.1.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-testutils-1.1.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/htrace-core-3.2.0-incubating.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/htrace-core4-4.0.1-incubating.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/httpclient-4.2.5.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/httpcore-4.2.5.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hue-plugins-3.9.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/irclib-1.10.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jackson-annotations-2.2.3.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jackson-core-2.2.3.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jackson-core-asl-1.8.8.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jackson-databind-2.2.3.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jackson-jaxrs-1.8.8.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jackson-mapper-asl-1.8.8.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jackson-xc-1.8.8.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jamon-runtime-2.3.1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/janino-2.7.6.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jasper-compiler-5.5.23.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jasper-runtime-5.5.23.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/java-xmlbuilder-0.4.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/javax.inject-1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jaxb-api-2.2.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jaxb-impl-2.2.3-1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jcommander-1.32.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jdo-api-3.0.1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jersey-client-1.9.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jersey-core-1.9.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jersey-guice-1.9.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jersey-json-1.9.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jersey-server-1.9.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jets3t-0.9.0.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jettison-1.1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jetty-6.1.26.cloudera.4.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jetty-all-7.6.0.v20120127.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jetty-all-server-7.6.0.v20120127.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jetty-util-6.1.26.cloudera.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jetty-util-6.1.26.cloudera.4.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jline-2.11.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jline-2.12.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/joda-time-1.6.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/joda-time-2.1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jopt-simple-4.9.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jpam-1.1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jsch-0.1.42.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jsp-api-2.1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jsr305-1.3.9.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jsr305-3.0.0.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jta-1.1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/junit-4.11.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/kafka-clients-0.9.0-kafka-2.0.0.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/kafka_2.10-0.9.0-kafka-2.0.0.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/kite-data-core-1.0.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/kite-data-hbase-1.0.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/kite-data-hive-1.0.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/kite-hadoop-compatibility-1.0.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/leveldbjni-all-1.8.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/libfb303-0.9.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/libthrift-0.9.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/log4j-1.2.16.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/log4j-1.2.17.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/logredactor-1.0.3.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/lz4-1.3.0.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/mail-1.4.1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/mapdb-0.9.9.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/maven-scm-api-1.4.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/maven-scm-provider-svn-commons-1.4.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/maven-scm-provider-svnexe-1.4.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/metrics-core-2.2.0.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/metrics-core-3.0.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/metrics-json-3.0.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/metrics-jvm-3.0.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/microsoft-windowsazure-storage-sdk-0.6.0.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/mina-core-2.0.4.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/mockito-all-1.8.5.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/netty-3.6.2.Final.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/netty-all-4.0.23.Final.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/opencsv-2.3.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/oro-2.0.8.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/paranamer-2.3.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-avro-1.5.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-cascading-1.5.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-column-1.5.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-common-1.5.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-encoding-1.5.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-format-2.1.0-cdh5.7.2-javadoc.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-format-2.1.0-cdh5.7.2-sources.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-format-2.1.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-generator-1.5.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-hadoop-1.5.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-hadoop-bundle-1.5.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-jackson-1.5.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-pig-1.5.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-pig-bundle-1.5.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-protobuf-1.5.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-scala_2.10-1.5.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-scrooge_2.10-1.5.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-test-hadoop2-1.5.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-thrift-1.5.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-tools-1.5.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/pentaho-aggdesigner-algorithm-5.1.5-jhyde.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/plexus-utils-1.5.6.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/protobuf-java-2.5.0.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/regexp-1.3.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/scala-library-2.10.5.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/serializer-2.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/servlet-api-2.5-20110124.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/servlet-api-2.5.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/slf4j-api-1.7.5.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/slf4j-log4j12-1.7.5.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/snappy-java-1.0.4.1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/spark-1.6.0-cdh5.7.2-yarn-shuffle.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/spark-streaming-flume-sink_2.10-1.6.0-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/stax-api-1.0-2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/stax-api-1.0.1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/stringtemplate-3.2.1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/super-csv-2.2.0.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/tempus-fugit-1.1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/trevni-avro-1.7.6-cdh5.7.2-hadoop2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/trevni-avro-1.7.6-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/trevni-core-1.7.6-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/twitter4j-core-3.0.3.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/twitter4j-media-support-3.0.3.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/twitter4j-stream-3.0.3.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/unused-1.0.0.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/velocity-1.5.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/velocity-1.7.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/xalan-2.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/xercesImpl-2.9.1.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/xml-apis-1.3.04.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/xmlenc-0.52.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/xz-1.0.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/zkclient-0.7.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/zookeeper-3.4.5-cdh5.7.2.jar
file:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/lib/hive/lib/mysql-connector-java-5.1.34.jar
file:/data/zeppelin-0.6.2-bin-all/bin/./
file:/fwdata/zeppelin-0.6.2-bin-all/interpreter/spark/zeppelin-spark_2.11-0.6.2.jar
file:/data/zeppelin-0.6.2-bin-all/bin/
file:/usr/java/jdk1.7.0_67-cloudera/lib/
file:/data/zeppelin-0.6.2-bin-all/lib/javax.ws.rs-api-2.0-m10.jar
file:/data/zeppelin-0.6.2-bin-all/lib/apacheds-i18n-2.0.0-M15.jar
file:/data/zeppelin-0.6.2-bin-all/lib/maven-scm-provider-svnexe-1.4.jar
file:/data/zeppelin-0.6.2-bin-all/lib/scala-parser-combinators_2.11-1.0.4.jar
file:/data/zeppelin-0.6.2-bin-all/lib/jsp-api-2.1.jar
file:/data/zeppelin-0.6.2-bin-all/lib/jasper-runtime-5.5.23.jar
file:/data/zeppelin-0.6.2-bin-all/lib/JavaEWAH-0.7.9.jar
file:/data/zeppelin-0.6.2-bin-all/lib/commons-collections-3.2.1.jar
file:/data/zeppelin-0.6.2-bin-all/lib/woodstox-core-asl-4.2.0.jar
file:/data/zeppelin-0.6.2-bin-all/lib/jetty-util-9.2.15.v20160210.jar
file:/data/zeppelin-0.6.2-bin-all/lib/curator-recipes-2.6.0.jar
file:/data/zeppelin-0.6.2-bin-all/lib/jetty-xml-9.2.15.v20160210.jar
file:/data/zeppelin-0.6.2-bin-all/lib/asm-3.1.jar
file:/data/zeppelin-0.6.2-bin-all/lib/dom4j-1.6.1.jar
file:/data/zeppelin-0.6.2-bin-all/lib/c3p0-0.9.1.1.jar
file:/data/zeppelin-0.6.2-bin-all/lib/apacheds-kerberos-codec-2.0.0-M15.jar
file:/data/zeppelin-0.6.2-bin-all/lib/org.eclipse.jdt.annotation-1.1.0.jar
file:/data/zeppelin-0.6.2-bin-all/lib/lucene-core-5.3.1.jar
file:/data/zeppelin-0.6.2-bin-all/lib/org.eclipse.jgit-4.1.1.201511131810-r.jar
file:/data/zeppelin-0.6.2-bin-all/lib/jetty-client-9.2.15.v20160210.jar
file:/data/zeppelin-0.6.2-bin-all/lib/spark-assembly.jar
file:/data/zeppelin-0.6.2-bin-all/lib/hadoop-auth-2.6.0.jar
file:/data/zeppelin-0.6.2-bin-all/lib/javax.servlet-api-3.1.0.jar
file:/data/zeppelin-0.6.2-bin-all/lib/cxf-rt-core-2.7.7.jar
file:/data/zeppelin-0.6.2-bin-all/lib/reflections-0.9.8.jar
file:/data/zeppelin-0.6.2-bin-all/lib/jackrabbit-jcr-commons-1.5.2.jar
file:/data/zeppelin-0.6.2-bin-all/lib/scala-reflect-2.11.7.jar
file:/data/zeppelin-0.6.2-bin-all/lib/lucene-queries-5.3.1.jar
file:/data/zeppelin-0.6.2-bin-all/lib/jetty-security-9.2.15.v20160210.jar
file:/data/zeppelin-0.6.2-bin-all/lib/jsr305-1.3.9.jar
file:/data/zeppelin-0.6.2-bin-all/lib/cxf-rt-transports-http-jetty-2.7.7.jar
file:/data/zeppelin-0.6.2-bin-all/lib/shiro-core-1.2.3.jar
file:/data/zeppelin-0.6.2-bin-all/lib/scala-compiler-2.11.7.jar
file:/data/zeppelin-0.6.2-bin-all/lib/lucene-highlighter-5.3.1.jar
file:/data/zeppelin-0.6.2-bin-all/lib/quartz-2.2.1.jar
file:/data/zeppelin-0.6.2-bin-all/lib/scala-xml_2.11-1.0.2.jar
file:/data/zeppelin-0.6.2-bin-all/lib/cxf-rt-bindings-xml-2.7.7.jar
file:/data/zeppelin-0.6.2-bin-all/lib/jersey-server-1.13.jar
file:/data/zeppelin-0.6.2-bin-all/lib/websocket-server-9.2.15.v20160210.jar
file:/data/zeppelin-0.6.2-bin-all/lib/jetty-util-6.1.26.jar
file:/data/zeppelin-0.6.2-bin-all/lib/websocket-client-9.2.15.v20160210.jar
file:/data/zeppelin-0.6.2-bin-all/lib/jetty-servlet-9.2.15.v20160210.jar
file:/data/zeppelin-0.6.2-bin-all/lib/guava-15.0.jar
file:/data/zeppelin-0.6.2-bin-all/lib/jetty-io-9.2.15.v20160210.jar
file:/data/zeppelin-0.6.2-bin-all/lib/lucene-join-5.3.1.jar
file:/data/zeppelin-0.6.2-bin-all/lib/slf4j-api-1.7.10.jar
file:/data/zeppelin-0.6.2-bin-all/lib/log4j-1.2.17.jar
file:/data/zeppelin-0.6.2-bin-all/lib/commons-net-3.1.jar
file:/data/zeppelin-0.6.2-bin-all/lib/scala-library-2.11.7.jar
file:/data/zeppelin-0.6.2-bin-all/lib/websocket-servlet-9.2.15.v20160210.jar
file:/data/zeppelin-0.6.2-bin-all/lib/cxf-rt-transports-http-2.7.7.jar
file:/data/zeppelin-0.6.2-bin-all/lib/java-xmlbuilder-0.4.jar
file:/data/zeppelin-0.6.2-bin-all/lib/commons-codec-1.5.jar
file:/data/zeppelin-0.6.2-bin-all/lib/gson-2.2.jar
file:/data/zeppelin-0.6.2-bin-all/lib/jetty-server-9.2.15.v20160210.jar
file:/data/zeppelin-0.6.2-bin-all/lib/commons-el-1.0.jar
file:/data/zeppelin-0.6.2-bin-all/lib/curator-client-2.6.0.jar
file:/data/zeppelin-0.6.2-bin-all/lib/jets3t-0.9.0.jar
file:/data/zeppelin-0.6.2-bin-all/lib/commons-httpclient-3.0.jar
file:/data/zeppelin-0.6.2-bin-all/lib/geronimo-javamail_1.4_spec-1.7.1.jar
file:/data/zeppelin-0.6.2-bin-all/lib/hadoop-common-2.6.0.jar
file:/data/zeppelin-0.6.2-bin-all/lib/commons-cli-1.2.jar
file:/data/zeppelin-0.6.2-bin-all/lib/zeppelin-zengine-0.6.2.jar
file:/data/zeppelin-0.6.2-bin-all/lib/aws-java-sdk-s3-1.10.62.jar
file:/data/zeppelin-0.6.2-bin-all/lib/lucene-memory-5.3.1.jar
file:/data/zeppelin-0.6.2-bin-all/lib/azure-storage-4.0.0.jar
file:/data/zeppelin-0.6.2-bin-all/lib/jackrabbit-webdav-1.5.2.jar
file:/data/zeppelin-0.6.2-bin-all/lib/api-asn1-api-1.0.0-M20.jar
file:/data/zeppelin-0.6.2-bin-all/lib/lucene-queryparser-5.3.1.jar
file:/data/zeppelin-0.6.2-bin-all/lib/hive-jdbc-1.1.0-cdh5.7.2.jar
file:/data/zeppelin-0.6.2-bin-all/lib/jetty-6.1.26.jar
file:/data/zeppelin-0.6.2-bin-all/lib/jackson-databind-2.5.3.jar
file:/data/zeppelin-0.6.2-bin-all/lib/stax2-api-3.1.1.jar
file:/data/zeppelin-0.6.2-bin-all/lib/xmlschema-core-2.0.3.jar
file:/data/zeppelin-0.6.2-bin-all/lib/httpcore-4.3.3.jar
file:/data/zeppelin-0.6.2-bin-all/lib/slf4j-log4j12-1.7.10.jar
file:/data/zeppelin-0.6.2-bin-all/lib/xmlenc-0.52.jar
file:/data/zeppelin-0.6.2-bin-all/lib/cxf-rt-frontend-jaxrs-2.7.7.jar
file:/data/zeppelin-0.6.2-bin-all/lib/javassist-3.12.1.GA.jar
file:/data/zeppelin-0.6.2-bin-all/lib/maven-scm-api-1.4.jar
file:/data/zeppelin-0.6.2-bin-all/lib/wsdl4j-1.6.3.jar
file:/data/zeppelin-0.6.2-bin-all/lib/jackson-mapper-asl-1.9.13.jar
file:/data/zeppelin-0.6.2-bin-all/lib/commons-lang-2.5.jar
file:/data/zeppelin-0.6.2-bin-all/lib/joda-time-2.8.1.jar
file:/data/zeppelin-0.6.2-bin-all/lib/lucene-sandbox-5.3.1.jar
file:/data/zeppelin-0.6.2-bin-all/lib/commons-io-2.4.jar
file:/data/zeppelin-0.6.2-bin-all/lib/jersey-servlet-1.13.jar
file:/data/zeppelin-0.6.2-bin-all/lib/commons-math3-3.1.1.jar
file:/data/zeppelin-0.6.2-bin-all/lib/jaxb-impl-2.2.6.jar
file:/data/zeppelin-0.6.2-bin-all/lib/jackson-annotations-2.5.0.jar
file:/data/zeppelin-0.6.2-bin-all/lib/spark-examples-1.6.0-cdh5.7.2-hadoop2.6.0-cdh5.7.2.jar
file:/data/zeppelin-0.6.2-bin-all/lib/htrace-core-3.0.4.jar
file:/data/zeppelin-0.6.2-bin-all/lib/xz-1.0.jar
file:/data/zeppelin-0.6.2-bin-all/lib/websocket-api-9.2.15.v20160210.jar
file:/data/zeppelin-0.6.2-bin-all/lib/hive-jdbc-1.1.0-cdh5.7.2-standalone.jar
file:/data/zeppelin-0.6.2-bin-all/lib/aws-java-sdk-core-1.10.62.jar
file:/data/zeppelin-0.6.2-bin-all/lib/lucene-analyzers-common-5.3.1.jar
file:/data/zeppelin-0.6.2-bin-all/lib/jetty-webapp-9.2.15.v20160210.jar
file:/data/zeppelin-0.6.2-bin-all/lib/zeppelin-interpreter-0.6.2.jar
file:/data/zeppelin-0.6.2-bin-all/lib/jersey-core-1.13.jar
file:/data/zeppelin-0.6.2-bin-all/lib/websocket-common-9.2.15.v20160210.jar
file:/data/zeppelin-0.6.2-bin-all/lib/cxf-api-2.7.7.jar
file:/data/zeppelin-0.6.2-bin-all/lib/jasper-compiler-5.5.23.jar
file:/data/zeppelin-0.6.2-bin-all/lib/plexus-utils-1.5.6.jar
file:/data/zeppelin-0.6.2-bin-all/lib/hadoop-annotations-2.6.0.jar
file:/data/zeppelin-0.6.2-bin-all/lib/commons-compress-1.4.1.jar
file:/data/zeppelin-0.6.2-bin-all/lib/jackson-core-2.5.3.jar
file:/data/zeppelin-0.6.2-bin-all/lib/jackson-core-asl-1.9.13.jar
file:/data/zeppelin-0.6.2-bin-all/lib/api-util-1.0.0-M20.jar
file:/data/zeppelin-0.6.2-bin-all/lib/spark-assembly-1.6.0-cdh5.7.2-hadoop2.6.0-cdh5.7.2.jar
file:/data/zeppelin-0.6.2-bin-all/lib/curator-framework-2.6.0.jar
file:/data/zeppelin-0.6.2-bin-all/lib/shiro-web-1.2.3.jar
file:/data/zeppelin-0.6.2-bin-all/lib/regexp-1.3.jar
file:/data/zeppelin-0.6.2-bin-all/lib/httpclient-4.3.6.jar
file:/data/zeppelin-0.6.2-bin-all/lib/geronimo-servlet_3.0_spec-1.0.jar
file:/data/zeppelin-0.6.2-bin-all/lib/protobuf-java-2.5.0.jar
file:/data/zeppelin-0.6.2-bin-all/lib/commons-vfs2-2.0.jar
file:/data/zeppelin-0.6.2-bin-all/lib/xml-apis-1.4.01.jar
file:/data/zeppelin-0.6.2-bin-all/lib/maven-scm-provider-svn-commons-1.4.jar
file:/data/zeppelin-0.6.2-bin-all/lib/commons-beanutils-1.8.3.jar
file:/data/zeppelin-0.6.2-bin-all/lib/commons-logging-1.1.1.jar
file:/data/zeppelin-0.6.2-bin-all/lib/commons-configuration-1.9.jar
file:/data/zeppelin-0.6.2-bin-all/lib/jetty-http-9.2.15.v20160210.jar
file:/data/zeppelin-0.6.2-bin-all/lib/aws-java-sdk-kms-1.10.62.jar
file:/data/zeppelin-0.6.2-bin-all/lib/jsch-0.1.53.jar
file:/data/zeppelin-0.6.2-bin-all/zeppelin-server-0.6.2.jar
file:/data/zeppelin-0.6.2-bin-all/conf/
file:/data/zeppelin-0.6.2-bin-all/interpreter/spark/zeppelin-spark_2.11-0.6.2.jar
file:/data/zeppelin-0.6.2-bin-all/lib/zeppelin-interpreter-0.6.2.jar
file:/data/zeppelin-0.6.2-bin-all/interpreter/spark/zeppelin-spark_2.11-0.6.2.jar
file:/etc/spark/conf.cloudera.spark_on_yarn/
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/spark-assembly-1.6.0-cdh5.7.2-hadoop2.6.0-cdh5.7.2.jar
file:/etc/spark/conf.cloudera.spark_on_yarn/yarn-conf/
file:/etc/hive/conf.cloudera.hive/
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/ST4-4.0.4.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/accumulo-core-1.6.0.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/accumulo-fate-1.6.0.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/accumulo-start-1.6.0.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/accumulo-trace-1.6.0.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/activation-1.1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/ant-1.9.1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/ant-launcher-1.9.1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/antlr-2.7.7.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/antlr-runtime-3.4.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/aopalliance-1.0.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/apache-log4j-extras-1.1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/apache-log4j-extras-1.2.17.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/apacheds-i18n-2.0.0-M15.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/apacheds-kerberos-codec-2.0.0-M15.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/api-asn1-api-1.0.0-M20.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/api-util-1.0.0-M20.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/asm-3.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/asm-commons-3.1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/asm-tree-3.1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/async-1.4.0.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/asynchbase-1.5.0.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/avro-1.7.6-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/avro-compiler-1.7.6-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/avro-ipc-1.7.6-cdh5.7.2-tests.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/avro-ipc-1.7.6-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/avro-mapred-1.7.6-cdh5.7.2-hadoop2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/avro-maven-plugin-1.7.6-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/avro-protobuf-1.7.6-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/avro-service-archetype-1.7.6-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/avro-thrift-1.7.6-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/aws-java-sdk-core-1.10.6.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/aws-java-sdk-kms-1.10.6.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/aws-java-sdk-s3-1.10.6.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/bonecp-0.8.0.RELEASE.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/calcite-avatica-1.0.0-incubating.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/calcite-core-1.0.0-incubating.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/calcite-linq4j-1.0.0-incubating.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-beanutils-1.7.0.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-beanutils-core-1.8.0.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-cli-1.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-codec-1.4.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-codec-1.8.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-collections-3.2.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-compiler-2.7.6.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-compress-1.4.1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-configuration-1.6.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-daemon-1.0.13.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-dbcp-1.4.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-digester-1.8.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-el-1.0.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-httpclient-3.0.1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-httpclient-3.1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-io-2.4.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-jexl-2.1.1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-lang-2.6.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-logging-1.1.3.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-math-2.1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-math3-3.1.1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-net-3.1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-pool-1.5.4.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/commons-vfs2-2.0.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/curator-client-2.6.0.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/curator-client-2.7.1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/curator-framework-2.6.0.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/curator-framework-2.7.1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/curator-recipes-2.6.0.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/curator-recipes-2.7.1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/datanucleus-api-jdo-3.2.6.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/datanucleus-core-3.2.10.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/datanucleus-rdbms-3.2.9.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/derby-10.11.1.1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/eigenbase-properties-1.1.4.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/fastutil-6.3.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/findbugs-annotations-1.3.9-1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-avro-source-1.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-dataset-sink-1.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-file-channel-1.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-hdfs-sink-1.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-hive-sink-1.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-irc-sink-1.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-jdbc-channel-1.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-jms-source-1.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-kafka-channel-1.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-kafka-source-1.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-ng-auth-1.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-ng-configuration-1.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-ng-core-1.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-ng-elasticsearch-sink-1.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-ng-embedded-agent-1.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-ng-hbase-sink-1.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-ng-kafka-sink-1.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-ng-log4jappender-1.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-ng-morphline-solr-sink-1.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-ng-node-1.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-ng-sdk-1.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-scribe-source-1.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-spillable-memory-channel-1.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-taildir-source-1.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-thrift-source-1.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-tools-1.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/flume-twitter-source-1.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/geronimo-annotation_1.0_spec-1.1.1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/geronimo-jaspic_1.0_spec-1.0.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/geronimo-jta_1.1_spec-1.1.1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/groovy-all-2.4.4.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/gson-2.2.4.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/guava-11.0.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/guava-11.0.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/guava-14.0.1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/guice-3.0.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/guice-servlet-3.0.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-annotations-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-ant-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-archive-logs-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-archives-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-auth-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-aws-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-azure-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-common-2.6.0-cdh5.7.2-tests.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-common-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-datajoin-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-distcp-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-extras-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-gridmix-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-hdfs-2.6.0-cdh5.7.2-tests.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-hdfs-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-hdfs-nfs-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-mapreduce-client-app-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-mapreduce-client-common-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-mapreduce-client-core-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-mapreduce-client-hs-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-mapreduce-client-hs-plugins-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-mapreduce-client-jobclient-2.6.0-cdh5.7.2-tests.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-mapreduce-client-jobclient-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-mapreduce-client-nativetask-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-mapreduce-client-shuffle-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-mapreduce-examples-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-nfs-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-openstack-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-rumen-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-sls-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-streaming-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-yarn-api-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-yarn-applications-distributedshell-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-yarn-applications-unmanaged-am-launcher-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-yarn-client-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-yarn-common-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-yarn-registry-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-yarn-server-applicationhistoryservice-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-yarn-server-common-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-yarn-server-nodemanager-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-yarn-server-resourcemanager-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-yarn-server-tests-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hadoop-yarn-server-web-proxy-2.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hamcrest-core-1.1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hamcrest-core-1.3.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hbase-annotations-1.2.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hbase-client-1.2.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hbase-common-1.2.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hbase-hadoop-compat-1.2.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hbase-hadoop2-compat-1.2.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hbase-protocol-1.2.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hbase-server-1.2.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/high-scale-lib-1.1.1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-accumulo-handler-1.1.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-ant-1.1.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-beeline-1.1.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-cli-1.1.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-common-1.1.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-contrib-1.1.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-exec-1.1.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-hbase-handler-1.1.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-hwi-1.1.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-jdbc-1.1.0-cdh5.7.2-standalone.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-jdbc-1.1.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-metastore-1.1.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-serde-1.1.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-service-1.1.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-shims-0.23-1.1.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-shims-1.1.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-shims-common-1.1.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-shims-scheduler-1.1.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hive-testutils-1.1.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/htrace-core-3.2.0-incubating.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/htrace-core4-4.0.1-incubating.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/httpclient-4.2.5.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/httpcore-4.2.5.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/hue-plugins-3.9.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/irclib-1.10.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jackson-annotations-2.2.3.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jackson-core-2.2.3.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jackson-core-asl-1.8.8.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jackson-databind-2.2.3.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jackson-jaxrs-1.8.8.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jackson-mapper-asl-1.8.8.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jackson-xc-1.8.8.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jamon-runtime-2.3.1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/janino-2.7.6.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jasper-compiler-5.5.23.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jasper-runtime-5.5.23.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/java-xmlbuilder-0.4.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/javax.inject-1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jaxb-api-2.2.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jaxb-impl-2.2.3-1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jcommander-1.32.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jdo-api-3.0.1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jersey-client-1.9.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jersey-core-1.9.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jersey-guice-1.9.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jersey-json-1.9.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jersey-server-1.9.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jets3t-0.9.0.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jettison-1.1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jetty-6.1.26.cloudera.4.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jetty-all-7.6.0.v20120127.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jetty-all-server-7.6.0.v20120127.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jetty-util-6.1.26.cloudera.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jetty-util-6.1.26.cloudera.4.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jline-2.11.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jline-2.12.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/joda-time-1.6.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/joda-time-2.1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jopt-simple-4.9.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jpam-1.1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jsch-0.1.42.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jsp-api-2.1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jsr305-1.3.9.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jsr305-3.0.0.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/jta-1.1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/junit-4.11.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/kafka-clients-0.9.0-kafka-2.0.0.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/kafka_2.10-0.9.0-kafka-2.0.0.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/kite-data-core-1.0.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/kite-data-hbase-1.0.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/kite-data-hive-1.0.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/kite-hadoop-compatibility-1.0.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/leveldbjni-all-1.8.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/libfb303-0.9.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/libthrift-0.9.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/log4j-1.2.16.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/log4j-1.2.17.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/logredactor-1.0.3.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/lz4-1.3.0.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/mail-1.4.1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/mapdb-0.9.9.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/maven-scm-api-1.4.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/maven-scm-provider-svn-commons-1.4.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/maven-scm-provider-svnexe-1.4.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/metrics-core-2.2.0.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/metrics-core-3.0.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/metrics-json-3.0.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/metrics-jvm-3.0.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/microsoft-windowsazure-storage-sdk-0.6.0.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/mina-core-2.0.4.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/mockito-all-1.8.5.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/netty-3.6.2.Final.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/netty-all-4.0.23.Final.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/opencsv-2.3.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/oro-2.0.8.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/paranamer-2.3.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-avro-1.5.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-cascading-1.5.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-column-1.5.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-common-1.5.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-encoding-1.5.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-format-2.1.0-cdh5.7.2-javadoc.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-format-2.1.0-cdh5.7.2-sources.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-format-2.1.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-generator-1.5.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-hadoop-1.5.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-hadoop-bundle-1.5.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-jackson-1.5.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-pig-1.5.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-pig-bundle-1.5.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-protobuf-1.5.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-scala_2.10-1.5.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-scrooge_2.10-1.5.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-test-hadoop2-1.5.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-thrift-1.5.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/parquet-tools-1.5.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/pentaho-aggdesigner-algorithm-5.1.5-jhyde.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/plexus-utils-1.5.6.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/protobuf-java-2.5.0.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/regexp-1.3.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/scala-library-2.10.5.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/serializer-2.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/servlet-api-2.5-20110124.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/servlet-api-2.5.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/slf4j-api-1.7.5.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/slf4j-log4j12-1.7.5.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/snappy-java-1.0.4.1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/spark-1.6.0-cdh5.7.2-yarn-shuffle.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/spark-streaming-flume-sink_2.10-1.6.0-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/stax-api-1.0-2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/stax-api-1.0.1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/stringtemplate-3.2.1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/super-csv-2.2.0.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/tempus-fugit-1.1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/trevni-avro-1.7.6-cdh5.7.2-hadoop2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/trevni-avro-1.7.6-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/trevni-core-1.7.6-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/twitter4j-core-3.0.3.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/twitter4j-media-support-3.0.3.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/twitter4j-stream-3.0.3.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/unused-1.0.0.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/velocity-1.5.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/velocity-1.7.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/xalan-2.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/xercesImpl-2.9.1.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/xml-apis-1.3.04.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/xmlenc-0.52.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/xz-1.0.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/zkclient-0.7.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/jars/zookeeper-3.4.5-cdh5.7.2.jar
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/lib/hadoop/LICENSE.txt
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/lib/hadoop/NOTICE.txt
file:/data/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/lib/hive/lib/mysql-connector-java-5.1.34.jar
file:/usr/java/jdk1.7.0_67-cloudera/jre/lib/ext/localedata.jar
file:/usr/java/jdk1.7.0_67-cloudera/jre/lib/ext/sunpkcs11.jar
file:/usr/java/jdk1.7.0_67-cloudera/jre/lib/ext/sunjce_provider.jar
file:/usr/java/jdk1.7.0_67-cloudera/jre/lib/ext/zipfs.jar
file:/usr/java/jdk1.7.0_67-cloudera/jre/lib/ext/dnsns.jar
file:/usr/java/jdk1.7.0_67-cloudera/jre/lib/ext/sunec.jar
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;After discussing this with 文昌, a question came up: spark-shell works fine from the command line, and the command line uses the very same Scala 2.10 spark-assembly, so why does Zeppelin fail?&lt;/p&gt;
&lt;p&gt;This raises a related question: how do you determine which jar a given class was loaded from? See http://stackoverflow.com/questions/1983839/determine-which-jar-file-a-class-is-from&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;I am not in front of an IDE right now, just looking at the API specs.&lt;/p&gt;
&lt;p&gt;CodeSource src = MyClass.class.getProtectionDomain().getCodeSource(); if (src != null) { URL jar = src.getLocation(); } I want to determine which JAR file a class is from. Is this the way to do it?&lt;/p&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;Yes. It works for all classes except classes loaded by bootstrap classloader. The other way to determine is:&lt;/p&gt;
&lt;p&gt;Class klass = String.class; URL location = klass.getResource(&amp;rsquo;/&amp;rsquo; + klass.getName().replace(&amp;rsquo;.&amp;rsquo;, &amp;lsquo;/&amp;rsquo;) + &amp;ldquo;.class&amp;rdquo;); As notnoop pointed out getProtectionDomain().getCodeSource().getLocation() method returns the location of the class file itself. For example:&lt;/p&gt;
&lt;p&gt;jar:file:/jdk/jre/lib/rt.jar!/java/lang/String.class file:/projects/classes/pkg/MyClass$1.class The klass.getResource() method returns the location of the jar file or CLASSPATH&lt;/p&gt;
&lt;p&gt;file:/Users/home/java/libs/ejb3-persistence-1.0.2.GA.jar file:/projects/classes&lt;/p&gt;
&lt;/blockquote&gt;
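&lt;p&gt;Wrapped up as a small Scala helper (a sketch of my own based on that answer, not code from Zeppelin or Spark; the classes in the example calls are only illustrations), the two techniques look roughly like this:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;// Minimal sketch: report where a class was loaded from.
// getCodeSource is null for classes loaded by the bootstrap classloader,
// so fall back to getResource (which itself may still return null).
def jarOf(cls: Class[_]): String = {
  val src = cls.getProtectionDomain.getCodeSource
  if (src != null) src.getLocation.toString
  else String.valueOf(cls.getResource(&amp;#34;/&amp;#34; + cls.getName.replace(&amp;#34;.&amp;#34;, &amp;#34;/&amp;#34;) + &amp;#34;.class&amp;#34;))
}

// e.g. in a Zeppelin %spark paragraph or in spark-shell:
// jarOf(classOf[org.apache.spark.SparkContext])
// jarOf(Class.forName(&amp;#34;scala.Option&amp;#34;))
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The first call should point at whichever spark-assembly actually won on the classpath&lt;/p&gt;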
&lt;p&gt;Working backwards from this, the guess is that some call inside Zeppelin uses a Scala 2.11 method, but our spark-assembly was built against 2.10 and sits relatively early on the classpath, hence the failure. So the first step is to print the Scala version from inside Zeppelin; see http://stackoverflow.com/questions/6121403/how-do-i-get-the-scala-version-from-within-scala-itself, which suggests the following&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;This will work without access to scala-compiler.jar:&lt;/p&gt;
&lt;p&gt;Welcome to Scala version 2.9.1.final (Java HotSpot(TM) 64-Bit Server VM, Java 1.6.0_26). Type in expressions to have them evaluated. Type :help for more information.&lt;/p&gt;
&lt;p&gt;scala&amp;gt; util.Properties.versionString res0: java.lang.String = version 2.9.1.final&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The result is&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;util.Properties.versionString
res3: String = version 2.10.5
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Next, unpack a clean copy of Zeppelin&lt;/p&gt;
&lt;p&gt;This time the result is:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;util.Properties.versionString
res0: String = version 2.11.7
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Just as suspected. So in this state (that is, before SPARK_HOME is set in zeppelin-env.sh and before our spark-assembly jar has been copied in), is the sc variable usable at all?&lt;/p&gt;
&lt;p&gt;The answer is yes&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;sc.version
res2: String = 2.0.0
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;And the version is surprisingly high: this is Spark 2.0&lt;/p&gt;
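&lt;p&gt;Combining this with the jar-location trick above gives a quick sanity check of what actually ended up on the interpreter classpath. A hypothetical %spark paragraph (sc here is the SparkContext that the interpreter provides):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;// Print the Scala and Spark versions plus the jar SparkContext came from
println(&amp;#34;scala : &amp;#34; + scala.util.Properties.versionString)
println(&amp;#34;spark : &amp;#34; + sc.version)
println(&amp;#34;jar   : &amp;#34; +
  classOf[org.apache.spark.SparkContext].getProtectionDomain.getCodeSource.getLocation)
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;On the clean Zeppelin this should reveal exactly which jar that Spark 2.0 is coming from&lt;/p&gt;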
&lt;p&gt;So where does this Spark package live? A quick search turns it up&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ find ./ -iname &amp;#34;*spark*&amp;#34;
./run/zeppelin-interpreter-spark-hdfs-BJ-KTDBTEST04.pid
./interpreter/spark
./interpreter/spark/pyspark
./interpreter/spark/pyspark/pyspark.zip
./interpreter/spark/R/lib/SparkR
./interpreter/spark/R/lib/SparkR/test_support/sparktestjar_2.10-1.0.jar
./interpreter/spark/R/lib/SparkR/tests/testthat/test_sparkSQL.R
./interpreter/spark/R/lib/SparkR/help/SparkR.rdb
./interpreter/spark/R/lib/SparkR/help/SparkR.rdx
./interpreter/spark/R/lib/SparkR/html/sparkRHive.init-deprecated.html
./interpreter/spark/R/lib/SparkR/html/spark.survreg.html
./interpreter/spark/R/lib/SparkR/html/spark.lapply.html
./interpreter/spark/R/lib/SparkR/html/sparkR.session.html
./interpreter/spark/R/lib/SparkR/html/spark_partition_id.html
./interpreter/spark/R/lib/SparkR/html/spark.kmeans.html
./interpreter/spark/R/lib/SparkR/html/SparkDataFrame.html
./interpreter/spark/R/lib/SparkR/html/spark.glm.html
./interpreter/spark/R/lib/SparkR/html/sparkR.conf.html
./interpreter/spark/R/lib/SparkR/html/sparkR.init-deprecated.html
./interpreter/spark/R/lib/SparkR/html/sparkR.session.stop.html
./interpreter/spark/R/lib/SparkR/html/sparkRSQL.init-deprecated.html
./interpreter/spark/R/lib/SparkR/html/spark.naiveBayes.html
./interpreter/spark/R/lib/SparkR/R/SparkR.rdb
./interpreter/spark/R/lib/SparkR/R/SparkR
./interpreter/spark/R/lib/SparkR/R/SparkR.rdx
./interpreter/spark/R/lib/sparkr.zip
./interpreter/spark/zeppelin-spark_2.11-0.6.2.jar
./interpreter/spark/dep/zeppelin-spark-dependencies_2.11-0.6.2.jar
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Note that ./interpreter/spark/dep/zeppelin-spark-dependencies_2.11-0.6.2.jar is a 182 MB jar, which is presumably where the bundled Spark 2.0 lives&lt;/p&gt;
&lt;p&gt;So what happens if we run show tables directly against this bundled Spark?&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;%spark.sql
show tables
---------
null
set zeppelin.spark.sql.stacktrace = true to see full stacktrace
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Turn that setting on and run it again&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;%spark.sql
show tables
---------
java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rwx------
	at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:612)
	at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:554)
	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:508)
	at org.apache.spark.sql.hive.client.HiveClientImpl.&amp;lt;init&amp;gt;(HiveClientImpl.scala:171)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
	at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:258)
	at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:359)
	at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:263)
	at org.apache.spark.sql.hive.HiveSharedState.metadataHive$lzycompute(HiveSharedState.scala:39)
	at org.apache.spark.sql.hive.HiveSharedState.metadataHive(HiveSharedState.scala:38)
	at org.apache.spark.sql.hive.HiveSharedState.externalCatalog$lzycompute(HiveSharedState.scala:46)
	at org.apache.spark.sql.hive.HiveSharedState.externalCatalog(HiveSharedState.scala:45)
	at org.apache.spark.sql.hive.HiveSessionState.catalog$lzycompute(HiveSessionState.scala:50)
	at org.apache.spark.sql.hive.HiveSessionState.catalog(HiveSessionState.scala:48)
	at org.apache.spark.sql.hive.HiveSessionState$$anon$1.&amp;lt;init&amp;gt;(HiveSessionState.scala:63)
	at org.apache.spark.sql.hive.HiveSessionState.analyzer$lzycompute(HiveSessionState.scala:63)
	at org.apache.spark.sql.hive.HiveSessionState.analyzer(HiveSessionState.scala:62)
	at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:49)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:582)
	at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:682)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.zeppelin.spark.SparkSqlInterpreter.interpret(SparkSqlInterpreter.java:115)
	at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:94)
	at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:341)
	at org.apache.zeppelin.scheduler.Job.run(Job.java:176)
	at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;At this point there are two possible approaches. One is to debug the Spark bundled with Zeppelin until it can read Hive and run Spark SQL correctly; the problem with that is that our nightly jobs actually run on Spark 1.6, so developing and debugging against 2.0 could behave differently from what runs in production.&lt;/p&gt;
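&lt;p&gt;(For the record, that particular error is only a permissions problem on the Hive scratch directory. If one did want to keep debugging the bundled Spark 2.0, the usual fix is roughly the following, assuming the default /tmp/hive location; depending on the setup the directory may live on HDFS, on the local filesystem, or both:)&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# make the Hive scratch dir on HDFS writable by everyone
hdfs dfs -mkdir -p /tmp/hive
hdfs dfs -chmod -R 777 /tmp/hive
# and, if the session actually uses the local filesystem, the local dir as well
chmod -R 777 /tmp/hive
&lt;/code&gt;&lt;/pre&gt;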
&lt;p&gt;The other approach is to use an older Zeppelin release, find a build that still uses Scala 2.10, and combine it with our own spark-assembly jar.&lt;/p&gt;
&lt;p&gt;(Looking back at &lt;a class=&#34;link&#34; href=&#34;https://issues.apache.org/jira/browse/ZEPPELIN-605&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://issues.apache.org/jira/browse/ZEPPELIN-605&lt;/a&gt;, that thread actually spells it out: from 0.6.1 on, Zeppelin already uses Spark 2.0 and Scala 2.11.)&lt;/p&gt;
&lt;p&gt;Back on the Zeppelin website, walking back through the old releases leads to this release note, https://zeppelin.apache.org/releases/zeppelin-release-0.6.0.html, which says:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Default backend version has been bumped up as follow:&lt;/p&gt;
&lt;p&gt;Cassandra: 3.0.1 Elasticsearch: 2.3.3 Flink: 1.0.3 Ignite: 1.6.0 Lens: 2.5.0-beta Spark: 1.6.1 Spark 2.0 support planned for 0.6.1 release&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;That is the one we want.&lt;/p&gt;
&lt;p&gt;However, trying to download it from http://101.96.8.164/archive.apache.org/dist/zeppelin/zeppelin-0.6.0/zeppelin-0.6.0-bin-all.tgz fails with:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Bandwidth limit exceeded&lt;/p&gt;
&lt;p&gt;The daily allowance of 5GB for this IP has been exceeded, and downloads disabled until midnight, UTC (circa 6 hours from now). If you have any questions about this, feel free to reach out to us at &lt;a class=&#34;link&#34; href=&#34;mailto:infrastructure@apache.org&#34; &gt;infrastructure@apache.org&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;So that mirror has been cut off for blowing through its quota...&lt;/p&gt;
&lt;p&gt;A quick round of googling leads instead to &lt;a class=&#34;link&#34; href=&#34;https://archive.apache.org/dist/zeppelin/zeppelin-0.6.0/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://archive.apache.org/dist/zeppelin/zeppelin-0.6.0/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Then, as usual, point SPARK_HOME in zeppelin-env.sh at our own Spark and copy over the self-built assembly jar with Spark SQL support.&lt;/p&gt;
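&lt;p&gt;(A minimal sketch of that zeppelin-env.sh change; the paths below are placeholders for wherever the self-built Spark actually lives:)&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# conf/zeppelin-env.sh
export SPARK_HOME=/opt/spark-1.6.0-bin-custom     # placeholder path to the externally built Spark 1.6
export HADOOP_CONF_DIR=/etc/hadoop/conf           # so the interpreter sees core-site/hdfs-site/hive-site
&lt;/code&gt;&lt;/pre&gt;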
&lt;p&gt;With that in place, %spark.sql can finally run the show tables command.&lt;/p&gt;
&lt;p&gt;That still leaves the following questions:&lt;/p&gt;
&lt;p&gt;[1] The startup parameters of the YARN application: by default it only gets 2560 MB of memory from the cluster, which is not enough. [2] How well Zeppelin controls its YARN application: the problem with the previous setup (Spark thrift server over JDBC) was that the thrift server could break after running for a long time, so how well does Zeppelin manage the YARN application in yarn-client mode? [3] What happens when several people run complex SQL at the same time?&lt;/p&gt;
&lt;p&gt;For [1] and [2], it turns out that Zeppelin&amp;rsquo;s memory on YARN is allocated dynamically: executors are requested from YARN when a SQL statement needs to run and are released again after a period of idleness (see the settings sketch after the metrics block below). The log looks like this:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;INFO [2016-12-13 15:50:35,958] ({task-result-getter-2} Logging.scala[logInfo]:58) - Finished task 94.0 in stage 24.0 (TID 30354) in 1933 ms on BJ-KTDBTEST05 (197/200)
 INFO [2016-12-13 15:50:35,959] ({task-result-getter-3} Logging.scala[logInfo]:58) - Finished task 183.0 in stage 24.0 (TID 30443) in 652 ms on BJ-KTDBTEST06 (198/200)
 INFO [2016-12-13 15:50:35,982] ({task-result-getter-0} Logging.scala[logInfo]:58) - Finished task 182.0 in stage 24.0 (TID 30442) in 765 ms on BJ-KTDBTEST05 (199/200)
 INFO [2016-12-13 15:50:35,996] ({task-result-getter-1} Logging.scala[logInfo]:58) - Finished task 179.0 in stage 24.0 (TID 30439) in 906 ms on BJ-KTDBTEST05 (200/200)
 INFO [2016-12-13 15:50:35,997] ({task-result-getter-1} Logging.scala[logInfo]:58) - Removed TaskSet 24.0, whose tasks have all completed, from pool default
 INFO [2016-12-13 15:50:35,997] ({dag-scheduler-event-loop} Logging.scala[logInfo]:58) - ResultStage 24 (take at NativeMethodAccessorImpl.java:-2) finished in 2.897 s
 INFO [2016-12-13 15:50:35,998] ({pool-2-thread-5} Logging.scala[logInfo]:58) - Job 14 finished: take at NativeMethodAccessorImpl.java:-2, took 7.237246 s
 INFO [2016-12-13 15:50:36,005] ({pool-2-thread-5} SchedulerFactory.java[jobFinished]:137) - Job remoteInterpretJob_1481615335715 finished by scheduler org.apache.zeppelin.spark.SparkInterpreter1861564560
 INFO [2016-12-13 15:51:35,668] ({spark-dynamic-executor-allocation} Logging.scala[logInfo]:58) - Requesting to kill executor(s) 161
 INFO [2016-12-13 15:51:35,678] ({spark-dynamic-executor-allocation} Logging.scala[logInfo]:58) - Removing executor 161 because it has been idle for 60 seconds (new desired total will be 42)
 INFO [2016-12-13 15:51:35,678] ({spark-dynamic-executor-allocation} Logging.scala[logInfo]:58) - Requesting to kill executor(s) 152
 INFO [2016-12-13 15:51:35,686] ({spark-dynamic-executor-allocation} Logging.scala[logInfo]:58) - Removing executor 152 because it has been idle for 60 seconds (new desired total will be 41)
 INFO [2016-12-13 15:51:35,686] ({spark-dynamic-executor-allocation} Logging.scala[logInfo]:58) - Requesting to kill executor(s) 146
 INFO [2016-12-13 15:51:35,692] ({spark-dynamic-executor-allocation} Logging.scala[logInfo]:58) - Removing executor 146 because it has been idle for 60 seconds (new desired total will be 40)
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The YARN monitoring page also shows some aggregate metrics, for example:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Application Metrics
Total Resource Preempted:	&amp;lt;memory:0, vCores:0&amp;gt;
Total Number of Non-AM Containers Preempted:	0
Total Number of AM Containers Preempted:	0
Resource Preempted from Current Attempt:	&amp;lt;memory:0, vCores:0&amp;gt;
Number of Non-AM Containers Preempted from Current Attempt:	0
Aggregate Resource Allocation:	134634368 MB-seconds, 88939 vcore-seconds
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;I could not find any detailed documentation for these metrics, though, so their meaning has to be guessed from the names.&lt;/p&gt;
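&lt;p&gt;(Coming back to problem [1]: how much memory the job gets is governed by whatever Spark properties the interpreter is launched with, and the idle-release behaviour in the log above is standard Spark dynamic allocation. A rough sketch via the SPARK_SUBMIT_OPTIONS hook in zeppelin-env.sh, where every value is purely illustrative:)&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# conf/zeppelin-env.sh -- illustrative values, tune for the actual cluster
# note: dynamic allocation also needs the external shuffle service enabled on the NodeManagers
export SPARK_SUBMIT_OPTIONS=&#34;--driver-memory 4g --conf spark.executor.memory=4g --conf spark.dynamicAllocation.enabled=true --conf spark.shuffle.service.enabled=true --conf spark.dynamicAllocation.executorIdleTimeout=60s&#34;
&lt;/code&gt;&lt;/pre&gt;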
&lt;p&gt;The Spark UI also shows 2 Fair Scheduler Pools, with SchedulingMode FAIR and FIFO respectively. https://www.oschina.net/translate/spark-job-scheduling, http://www.cnblogs.com/cenyuhai/p/3537249.html and http://ifeve.com/spark-schedule/ all discuss the FAIR scheduling mode, but as for [3], my own tests show that SQL submitted by several people at the same time still runs sequentially, which is rather disappointing.&lt;/p&gt;
&lt;p&gt;Finally, the default interpreter needs to be changed so that spark.sql becomes the default. This part is simple: following http://stackoverflow.com/questions/33834401/apache-zeppelin-set-default-interpreter, change the configuration to the following and restart.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-xml&#34; data-lang=&#34;xml&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nt&#34;&gt;&amp;lt;value&amp;gt;&lt;/span&gt;org.apache.zeppelin.spark.SparkSqlInterpreter,org.apache.zeppelin.spark.SparkInterpreter,org.apache.zeppelin.spark.PySparkInterpreter,org.apache.zeppelin.rinterpreter.RRepl,org.apache.zeppelin.rinterpreter.KnitR,org.apache.zeppelin.spark.SparkRInterpreter,org.apache.zeppelin.spark.DepInterpreter,org.apache.zeppelin.markdown.Markdown,org.apache.zeppelin.angular.AngularInterpreter,org.apache.zeppelin.shell.ShellInterpreter,org.apache.zeppelin.file.HDFSFileInterpreter,org.apache.zeppelin.flink.FlinkInterpreter,,org.apache.zeppelin.python.PythonInterpreter,org.apache.zeppelin.lens.LensInterpreter,org.apache.zeppelin.ignite.IgniteInterpreter,org.apache.zeppelin.ignite.IgniteSqlInterpreter,org.apache.zeppelin.cassandra.CassandraInterpreter,org.apache.zeppelin.geode.GeodeOqlInterpreter,org.apache.zeppelin.postgresql.PostgreSqlInterpreter,org.apache.zeppelin.jdbc.JDBCInterpreter,org.apache.zeppelin.kylin.KylinInterpreter,org.apache.zeppelin.elasticsearch.ElasticsearchInterpreter,org.apache.zeppelin.scalding.ScaldingInterpreter,org.apache.zeppelin.alluxio.AlluxioInterpreter,org.apache.zeppelin.hbase.HbaseInterpreter,org.apache.zeppelin.livy.LivySparkInterpreter,org.apache.zeppelin.livy.LivyPySparkInterpreter,org.apache.zeppelin.livy.LivySparkRInterpreter,org.apache.zeppelin.livy.LivySparkSQLInterpreter&lt;span class=&#34;nt&#34;&gt;&amp;lt;/value&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;-&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;ndash;&lt;/p&gt;
&lt;p&gt;Added 2016-12-13 17:14:52&lt;/p&gt;
&lt;p&gt;I then found &lt;a class=&#34;link&#34; href=&#34;http://terrence.logdown.com/posts/848908&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;http://terrence.logdown.com/posts/848908&lt;/a&gt; and &lt;a class=&#34;link&#34; href=&#34;http://terrence.logdown.com/posts/1172854-zeppelin-livy-server-supports-multiple-users-review&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;http://terrence.logdown.com/posts/1172854-zeppelin-livy-server-supports-multiple-users-review&lt;/a&gt;. Both posts are well written and mention a zeppelin.spark.concurrentSQL setting, which seems to solve problem [3] above; strangely, though, after turning it on the scheduling mode switched to FIFO instead. What a wonderful world.&lt;/p&gt;
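&lt;p&gt;(For reference, the two knobs discussed there are interpreter properties set in the Zeppelin interpreter settings page; the names are as given in those posts, the values are illustrative:)&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# Zeppelin spark interpreter properties
zeppelin.spark.concurrentSQL   true    # let several %spark.sql paragraphs run at the same time
spark.scheduler.mode           FAIR    # scheduling mode inside the shared SparkContext
&lt;/code&gt;&lt;/pre&gt;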
&lt;p&gt;-&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;ndash;&lt;/p&gt;
&lt;p&gt;2016-12-13 17:43:41 &lt;a class=&#34;link&#34; href=&#34;http://www.jianshu.com/p/297c3893d7e7&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;http://www.jianshu.com/p/297c3893d7e7&lt;/a&gt; also has some introduction to Livy&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&#34;历史评论&#34;&gt;Archived comments
&lt;/h2&gt;&lt;p&gt;&lt;strong&gt;lework&lt;/strong&gt; (2017-01-22 15:19:40):&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Hi, have you tried spark on yarn mode? My setup is a CDH 5.9.0 Hadoop cluster; with Zeppelin 0.6.1 in spark on yarn mode, sc.version works fine, but running RDD commands fails with java.lang.ClassNotFoundException: $line6431442222.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$1. Hope you can help when you have time, thanks.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;strong&gt;ZRJ&lt;/strong&gt; (2017-01-23 13:24:49):&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Try 0.6.0?&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;strong&gt;frone&lt;/strong&gt; (2017-09-29 11:52:01):&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Nice work. I have been fighting Zeppelin for three days and am currently trying to build it myself, but it keeps failing. Looks like 0.6.0 it is; 0.6.1, 0.6.2 and 0.7.3 all failed for me...&lt;/p&gt;
&lt;p&gt;Do you host this blog on an overseas server? Are you running it on a VPS?&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;strong&gt;ZRJ&lt;/strong&gt; (2017-09-29 20:43:49):&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;0.6.0 should be the one that works; I have forgotten the details, it has been too long.&lt;/p&gt;
&lt;p&gt;The blog does not need a VPS; traffic is low, so shared hosting is enough.&lt;/p&gt;
&lt;/blockquote&gt;
</description>
        </item>
        <item>
        <title>spark on hive 模式导致读写 hdfs 失败</title>
        <link>https://blog.zrj.me/posts/2016-11-27-1693/</link>
        <pubDate>Sun, 27 Nov 2016 16:31:17 +0800</pubDate>
        
        <guid>https://blog.zrj.me/posts/2016-11-27-1693/</guid>
        <description>&lt;p&gt;Spark SQL works on Hive tables, but underneath it is still HDFS. On the previous cluster HDFS had no HA and everything was fine. Recently the Spark SQL jobs were migrated to a new cluster; right after the migration they still ran normally, but then HA was enabled for HDFS on that cluster and the trouble started.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
	at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:88)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1826)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1431)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:4235)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:895)
	at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getFileInfo(AuthorizationProviderProxyClientProtocol.java:527)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:824)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The error is Operation category READ is not supported in state standby. A first look at the cluster shows that during the HA setup the active HDFS node changed: node1 used to be the active NameNode, but now node1 is standby and node2 has become active.&lt;/p&gt;
&lt;p&gt;That change is not a problem in itself: under HA either node should be allowed to become active, and applications on top should cope with the active node moving. So the first guess is that our Spark SQL jobs are pinned to node1 when reading and writing HDFS files for Hive, and that some configuration controls this. Nothing obvious shows up in the CDH configuration pages, so I went onto the machines to look at the config files directly; the HDFS settings used by CDH&amp;rsquo;s Spark live in /etc/spark/conf.cloudera.spark_on_yarn/yarn-conf/hdfs-site.xml&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-xml&#34; data-lang=&#34;xml&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nt&#34;&gt;&amp;lt;property&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;nt&#34;&gt;&amp;lt;name&amp;gt;&lt;/span&gt;dfs.ha.namenodes.nameservice1&lt;span class=&#34;nt&#34;&gt;&amp;lt;/name&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;    &lt;span class=&#34;nt&#34;&gt;&amp;lt;value&amp;gt;&lt;/span&gt;namenode50,namenode84&lt;span class=&#34;nt&#34;&gt;&amp;lt;/value&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;nt&#34;&gt;&amp;lt;/property&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Both NameNodes are listed under the nameservice, so in theory this should be fine. A round of googling only turned up answers that had nothing to do with the problem, so back to thinking it through on my own...&lt;/p&gt;
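&lt;p&gt;(Two checks that are handy at this point: which NameNode is currently active, and which HA client settings the Spark and Hive clients actually pick up on a given node. The NameNode IDs below come from the config above, and the property names assume the nameservice is called nameservice1:)&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# which of the two NameNodes is active right now
hdfs haadmin -getServiceState namenode50
hdfs haadmin -getServiceState namenode84
# the effective client-side HA settings on this node
hdfs getconf -confKey dfs.nameservices
hdfs getconf -confKey dfs.client.failover.proxy.provider.nameservice1
&lt;/code&gt;&lt;/pre&gt;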
&lt;p&gt;Out of curiosity I compared the same file on other nodes and found the configuration was inconsistent across the cluster, so my guess was that the client configuration had not been pushed out when HA was set up. I redeployed the client configuration from the CDH UI and restarted the cluster. It made no difference whatsoever...&lt;/p&gt;
&lt;p&gt;To narrow things down I tried the beeline client, started as the hdfs user, and found that create table and drop table worked fine; in the HDFS web UI the corresponding table directories under /hive/warehouse were created and removed as expected. So at least the beeline path is healthy, which points at the configuration used when invoking Spark SQL. I then tried starting an interactive shell with /opt/cloudera/parcels/CDH/lib/spark/bin/spark-sql instead of passing a script with -f, and running create there worked fine as well.&lt;/p&gt;
&lt;p&gt;More searching turned up this: https://community.hortonworks.com/questions/9790/orgapachehadoopipcstandbyexception.html&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;I know turning the nn1 back to ACTIVE solves the issue. Looking for workarounds which doesn&amp;rsquo;t require this manual operation. Thanks in advance.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;So I worked around the problem for the moment and got the Spark SQL scripts running again, but later, running a select through beeline failed with:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Error: Error while compiling statement: FAILED: SemanticException Unable to determine if hdfs://.... is encrypted: java.lang.IllegalArgumentException: Wrong FS: hdfs://..., expected: hdfs://nameservice1 (state=42000,code=40000
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The same statement ran fine from the /opt/cloudera/parcels/CDH/lib/spark/bin/spark-sql command line. The underlying reason is that beeline was connecting to Hive&amp;rsquo;s thrift server rather than the Spark thrift server, as the ps output shows:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;hive      6075 24337  0 16:53 ?        00:01:09 /usr/java/jdk1.7.0_67-cloudera/bin/java -Xmx256m -Dhadoop.log.dir=/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/lib/hadoop/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/lib/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Xms856686592 -Xmx856686592 -XX:MaxPermSize=512M -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:OnOutOfMemoryError=/opt/cm-5.7.2/lib64/cmf/service/common/killparent.sh -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/lib/hive/lib/hive-service-1.1.0-cdh5.7.2.jar org.apache.hive.service.server.HiveServer2
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;There are really two things to do here. First, port 10000 should be freed up so the Spark thrift server can serve on it; that means changing the CDH configuration, removing the Hive thrift server role and then working out how to add a Spark thrift server service. Second, and stepping back a bit, even the Hive thrift server ought to be able to locate these files on HDFS correctly; my feeling is that this pit comes from having created part of the table schemas and loaded data in Hive first and only afterwards enabling HA. Ideally HA would be configured during deployment, before any tables are created or data is loaded.&lt;/p&gt;
&lt;p&gt;CDH itself is rather hostile to Spark here (presumably to push its own Impala), so the Spark thrift server cannot be configured from the CDH UI. Our approach is to remove the Hive thrift server role and start the Spark thrift server by hand:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;/opt/cloudera/parcels/CDH/lib/spark/sbin/start-thriftserver.sh  &lt;span class=&#34;se&#34;&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;--master yarn-client &lt;span class=&#34;se&#34;&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;--num-executors &lt;span class=&#34;m&#34;&gt;3&lt;/span&gt;  &lt;span class=&#34;se&#34;&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;--executor-cores &lt;span class=&#34;m&#34;&gt;3&lt;/span&gt; &lt;span class=&#34;se&#34;&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;--executor-memory 2G   &lt;span class=&#34;se&#34;&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;--jars /opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18/lib/hive/lib/mysql-connector-java-5.1.34.jar &lt;span class=&#34;se&#34;&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;--driver-java-options &lt;span class=&#34;s2&#34;&gt;&amp;#34;-Dlog4j.configuration=file:///opt/cloudera/parcels/CDH/lib/spark/conf/log4j.properties&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The ugly part is that this sits outside CDH&amp;rsquo;s unified management, so whenever beeline is needed we first have to check whether the Spark thrift server is running, start it by hand if it is not, and restart it by hand if it dies along the way. Annoying.&lt;/p&gt;
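&lt;p&gt;(Once it is up, a quick smoke test from beeline; this assumes the default port 10000 on the local host, and the user name is just an example:)&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# connect beeline to the manually started Spark thrift server
beeline -u jdbc:hive2://localhost:10000 -n hdfs -e &#39;show tables;&#39;
&lt;/code&gt;&lt;/pre&gt;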
&lt;p&gt;Trying to delete HiveServer2 from CDH also throws an error saying at least one HiveServer2 is required, so fine, it stays, but I stopped it manually so that port 10000 could be handed over to the Spark thrift server. That still felt ugly, so in the end I simply moved the Hive thrift server to 10001, which avoids the port conflict altogether.&lt;/p&gt;
&lt;p&gt;Another thing noticed after starting the Spark thrift server: the memory available to YARN was too small, only 12 GB for the whole cluster, which makes no sense. A node actually has 32 GB, so even with the master node doing no work there is 64 GB, and at least 50 GB or so ought to go to YARN. So I changed the value to 20 GB in the UI and redeployed the configuration.&lt;/p&gt;
&lt;p&gt;-&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;ndash;&lt;/p&gt;
&lt;p&gt;Added 2016-11-30 00:05:07&lt;/p&gt;
&lt;p&gt;Here, https://github.com/mattshma/bigdata/issues/44, it is mentioned that:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;After HDFS HA is enabled, Hive fails as follows:&lt;/p&gt;
&lt;p&gt;FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:java.lang.IllegalArgumentException: Wrong FS: hdfs://namenode:8020/user/hive/warehouse/xxxxx.db/yyyyyyy, expected: hdfs://hadoopha) Tables created before HA was enabled cannot be dropped afterwards, because the NameNode information in the Hive metastore has not been updated. The procedure is:&lt;/p&gt;
&lt;p&gt;Stop the Hive service; on the Hive metastore tab, click Actions and run Update Hive Metastore NameNodes; start the Hive service again. After HA is set up, CDH itself also says the Hive metadata needs updating.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;That action resolves the Wrong FS problem&lt;/p&gt;
&lt;p&gt;-&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;ndash;&lt;/p&gt;
&lt;p&gt;2016-12-4 11:38:54 And with that, the NameNode failover problem mentioned above is solved as well&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&#34;历史评论&#34;&gt;Archived comments
&lt;/h2&gt;&lt;p&gt;&lt;strong&gt;barretcharlie&lt;/strong&gt; (2018-04-10 14:56:35):&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;After searching for ages, yours is the write-up that actually gets it right. May I repost it?&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;strong&gt;ZRJ&lt;/strong&gt; (2018-04-10 20:34:34):&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Haha, glad it helped. Feel free to repost, just credit the source&lt;/p&gt;
&lt;/blockquote&gt;
</description>
        </item>
        <item>
        <title>spark 读取 jdbc 的时候 where 过滤的问题</title>
        <link>https://blog.zrj.me/posts/2016-06-19-spark-%E8%AF%BB%E5%8F%96-jdbc-%E7%9A%84%E6%97%B6%E5%80%99-where-%E8%BF%87%E6%BB%A4%E7%9A%84%E9%97%AE%E9%A2%98/</link>
        <pubDate>Sun, 19 Jun 2016 14:24:01 +0800</pubDate>
        
        <guid>https://blog.zrj.me/posts/2016-06-19-spark-%E8%AF%BB%E5%8F%96-jdbc-%E7%9A%84%E6%97%B6%E5%80%99-where-%E8%BF%87%E6%BB%A4%E7%9A%84%E9%97%AE%E9%A2%98/</guid>
        <description>&lt;p&gt;Usually we have Spark read from JDBC like this:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-java&#34; data-lang=&#34;java&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;n&#34;&gt;DataFrame&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;dataFrame&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;sqlContext&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;na&#34;&gt;read&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;().&lt;/span&gt;&lt;span class=&#34;na&#34;&gt;jdbc&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;(&lt;/span&gt;&lt;span class=&#34;n&#34;&gt;jdbcUrl&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;tableName&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;n&#34;&gt;DBConfigUtil&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;na&#34;&gt;generateProperties&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;());&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The immediate problem is that this reads the whole table. What if we only want a subset of the data?&lt;/p&gt;
&lt;p&gt;The natural expectation is that sqlContext&amp;rsquo;s read interface has a parameter for this, but unfortunately it does not. Reading through the Spark source, first https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRelation.scala#L103, then https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRDD.scala#L195, and following it to https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala#L1141, you can see there is in fact a filter&lt;/p&gt;
&lt;p&gt;The next question is whether that filter happens before or after the data is fetched&lt;/p&gt;
&lt;p&gt;Following http://stackoverflow.com/questions/6479107/how-to-enable-mysql-query-log and turning on the query log, you can see that it happens before the fetch: the filter is sent to the DB server as part of the query SQL&lt;/p&gt;
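&lt;p&gt;(A minimal sketch of turning that query log on for this kind of check; the log file path is arbitrary, and the log should be switched off again afterwards:)&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# enable the MySQL general query log
mysql -uroot -p -e &#34;SET GLOBAL general_log_file = &#39;/tmp/mysql-query.log&#39;; SET GLOBAL general_log = &#39;ON&#39;;&#34;
# then watch the statements Spark actually sends; the pushed-down WHERE clause shows up here
tail -f /tmp/mysql-query.log
&lt;/code&gt;&lt;/pre&gt;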
&lt;p&gt;This example also confirms, once again, Spark&amp;rsquo;s lazy evaluation&lt;/p&gt;
</description>
        </item>
        <item>
        <title>spark 操作 hbase</title>
        <link>https://blog.zrj.me/posts/2016-04-21-spark-%E6%93%8D%E4%BD%9C-hbase/</link>
        <pubDate>Thu, 21 Apr 2016 09:51:24 +0800</pubDate>
        
        <guid>https://blog.zrj.me/posts/2016-04-21-spark-%E6%93%8D%E4%BD%9C-hbase/</guid>
        <description>&lt;p&gt;As mentioned earlier in &lt;a class=&#34;link&#34; href=&#34;http://zrj.me/archives/1621&#34;  title=&#34;spark 操作 mysql&#34;
     target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;spark 操作 mysql&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;=========================&lt;/p&gt;
&lt;p&gt;Buy one get one free, the HBase version:&lt;/p&gt;
&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;http://www.iteblog.com/archives/1051&#34;  title=&#34;http://www.iteblog.com/archives/1051&#34;
     target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Spark读取Hbase中的数据&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;======================&lt;/p&gt;
&lt;p&gt;On working with HBase there are also these two articles, &lt;a class=&#34;link&#34; href=&#34;http://wuchong.me/blog/2015/04/06/spark-on-hbase-new-api/&#34;  title=&#34;http://wuchong.me/blog/2015/04/06/spark-on-hbase-new-api/&#34;
     target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Spark 下操作 HBase（1.0.0 新 API）&lt;/a&gt;，&lt;a class=&#34;link&#34; href=&#34;https://gist.github.com/wuchong/95630f80966d07d7453b&#34;  title=&#34;https://gist.github.com/wuchong/95630f80966d07d7453b&#34;
     target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://gist.github.com/wuchong/95630f80966d07d7453b&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Both are good. One more note: the &amp;ldquo;org.apache.hbase&amp;rdquo; % &amp;ldquo;hbase-client&amp;rdquo; % &amp;ldquo;1.1.3&amp;rdquo; artifact seems to be broken; this version also caused errors under pom.xml before, and you need&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-xml&#34; data-lang=&#34;xml&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nt&#34;&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;			&lt;span class=&#34;nt&#34;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.hbase&lt;span class=&#34;nt&#34;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;			&lt;span class=&#34;nt&#34;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;hbase-client&lt;span class=&#34;nt&#34;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;			&lt;span class=&#34;nt&#34;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.1.3&lt;span class=&#34;nt&#34;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;			&lt;span class=&#34;nt&#34;&gt;&amp;lt;exclusions&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;				&lt;span class=&#34;nt&#34;&gt;&amp;lt;exclusion&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;					&lt;span class=&#34;nt&#34;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;jdk.tools&lt;span class=&#34;nt&#34;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;					&lt;span class=&#34;nt&#34;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;jdk.tools&lt;span class=&#34;nt&#34;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;				&lt;span class=&#34;nt&#34;&gt;&amp;lt;/exclusion&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;			&lt;span class=&#34;nt&#34;&gt;&amp;lt;/exclusions&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;		&lt;span class=&#34;nt&#34;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;for it to work. With sbt, however, even configuring 1.1.3 as &amp;ldquo;org.apache.hbase&amp;rdquo; % &amp;ldquo;hbase-client&amp;rdquo; % &amp;ldquo;1.1.3&amp;rdquo; exclude(&amp;ldquo;jdk.tools&amp;rdquo;, &amp;ldquo;jdk.tools&amp;rdquo;) still would not resolve. The root cause is hbase-client itself, but the point is that if I had not already been through this in pom.xml, sbt alone would have given no hint of this kind of pitfall; there will probably be more of these later, with no good way around them&lt;/p&gt;
&lt;p&gt;In the end the 1.0.0 version from the tutorial worked, even though my HBase server is 1.1.3; hopefully the API has not changed much&lt;/p&gt;
&lt;p&gt;libraryDependencies += &amp;ldquo;org.apache.hbase&amp;rdquo; % &amp;ldquo;hbase-client&amp;rdquo; % &amp;ldquo;1.0.0&amp;rdquo;&lt;/p&gt;
&lt;p&gt;libraryDependencies += &amp;ldquo;org.apache.hbase&amp;rdquo; % &amp;ldquo;hbase-common&amp;rdquo; % &amp;ldquo;1.0.0&amp;rdquo;&lt;/p&gt;
&lt;p&gt;libraryDependencies += &amp;ldquo;org.apache.hbase&amp;rdquo; % &amp;ldquo;hbase-server&amp;rdquo; % &amp;ldquo;1.0.0&amp;rdquo;&lt;/p&gt;
&lt;p&gt;One more gripe about the libraryDependencies syntax: yes, you can write ++= Seq( , but then the entries need comma separation, and when you are constantly adding and removing entries it is a pain to keep the commas right. And the %% / % syntax is terse, sure, but baffling to anyone who does not already know it; which field is which? A pom-style XML is perfectly clear, nobody types it by hand anyway since it is machine generated, and at worst you copy and paste&lt;/p&gt;
&lt;p&gt;Once again, my sincerest regards to sbt&lt;/p&gt;
&lt;p&gt;=====================&lt;/p&gt;
&lt;p&gt;2016-4-21 09:56:48 If you hit a NoClassDefFoundError, see &lt;a class=&#34;link&#34; href=&#34;http://mangocool.com/detail_1_1437009997261.html&#34;  title=&#34;http://mangocool.com/detail_1_1437009997261.html&#34;
     target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/HBaseConfiguration&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;=====================&lt;/p&gt;
&lt;p&gt;2016-4-21 10:06:02 The original post uses libraryDependencies += &amp;ldquo;org.apache.spark&amp;rdquo; %% &amp;ldquo;spark-core&amp;rdquo; % &amp;ldquo;1.3.0&amp;rdquo;. Since I already had &amp;ldquo;org.apache.spark&amp;rdquo; %% &amp;ldquo;spark-core&amp;rdquo; % &amp;ldquo;1.5.0&amp;rdquo; % &amp;ldquo;provided&amp;rdquo;, I figured I could drop the 1.3.0 one, which promptly failed with class &amp;ldquo;javax.servlet.FilterRegistration&amp;rdquo;&amp;rsquo;s signer information does not match signer information of other classes in the same package. &lt;a class=&#34;link&#34; href=&#34;https://issues.apache.org/jira/browse/SPARK-1693&#34;  title=&#34;https://issues.apache.org/jira/browse/SPARK-1693&#34;
     target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://issues.apache.org/jira/browse/SPARK-1693&lt;/a&gt; also discusses it; better to just put the dependency back&lt;/p&gt;
&lt;p&gt;=======================&lt;/p&gt;
&lt;p&gt;2016-4-21 14:17:34 &lt;a class=&#34;link&#34; href=&#34;http://mangocool.com/detail_1_1437009997261.html&#34;  title=&#34;http://mangocool.com/detail_1_1437009997261.html&#34;
     target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/HBaseConfiguration&lt;/a&gt; suggests adding it to the classpath, which does solve part of the problem, but I still kept getting java.lang.ClassNotFoundException: org.apache.htrace.Trace even with the jar in place and the classpath set. Only later did I learn there are actually two artifacts: libraryDependencies += &amp;ldquo;org.htrace&amp;rdquo; % &amp;ldquo;htrace-core&amp;rdquo; % &amp;ldquo;3.0.4&amp;rdquo; is the wrong name; it should be libraryDependencies += &amp;ldquo;org.apache.htrace&amp;rdquo; % &amp;ldquo;htrace-core&amp;rdquo; % &amp;ldquo;3.1.0-incubating&amp;rdquo;&lt;/p&gt;
&lt;p&gt;=========================&lt;/p&gt;
&lt;p&gt;2016-4-21 14:38:31 It got stuck for ages on Calculating region sizes for table; a closer look showed ZooKeeper was connecting to port 2181, which is a different ZK. It apparently was not picking up hbase-site.xml, so I quickly added conf.set(&amp;ldquo;hbase.zookeeper.property.clientPort&amp;rdquo;, &amp;ldquo;2222&amp;rdquo;) in the code&lt;/p&gt;
</description>
        </item>
        <item>
        <title>spark sql</title>
        <link>https://blog.zrj.me/posts/2016-04-20-spark-sql/</link>
        <pubDate>Wed, 20 Apr 2016 22:25:18 +0800</pubDate>
        
        <guid>https://blog.zrj.me/posts/2016-04-20-spark-sql/</guid>
        <description>&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;http://spark.apache.org/docs/latest/sql-programming-guide.html&#34;  title=&#34;http://spark.apache.org/docs/latest/sql-programming-guide.html&#34;
     target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;http://spark.apache.org/docs/latest/sql-programming-guide.html&lt;/a&gt; According to this, a CSV file can apparently be treated as a table, which makes things a lot easier&lt;/p&gt;
&lt;p&gt;===================&lt;/p&gt;
&lt;p&gt;2016-4-20 22:43:18 If the jar you assembled cannot connect over JDBC, see &lt;a class=&#34;link&#34; href=&#34;http://stackoverflow.com/questions/30221677/spark-sql-postgresql-jdbc-classpath-issues&#34;  title=&#34;http://stackoverflow.com/questions/30221677/spark-sql-postgresql-jdbc-classpath-issues&#34;
     target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;http://stackoverflow.com/questions/30221677/spark-sql-postgresql-jdbc-classpath-issues&lt;/a&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;I was able to get it to work locally with these commands: sbt package and spark-submit &amp;ndash;driver-class-path ~/.m2/repository/org/postgresql/postgresql/9.3-1102-jdbc41/postgresql-9.3-1102-j‌​dbc41.jar target/scala-2.10/simple-project_2.10-1.0.jar&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;what a shit&lt;/p&gt;
</description>
        </item>
        <item>
        <title>spark 操作 mysql</title>
        <link>https://blog.zrj.me/posts/2016-04-19-spark-%E6%93%8D%E4%BD%9C-mysql/</link>
        <pubDate>Tue, 19 Apr 2016 18:55:41 +0800</pubDate>
        
        <guid>https://blog.zrj.me/posts/2016-04-19-spark-%E6%93%8D%E4%BD%9C-mysql/</guid>
        <description>&lt;p&gt;There are two main approaches: the old one, from before Spark 1.3, where you roll it yourself, and the newer one using Spark SQL and its DataFrame, which also works&lt;/p&gt;
&lt;p&gt;=================================&lt;/p&gt;
&lt;p&gt;The old way:&lt;/p&gt;
&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;http://www.iteblog.com/archives/1113&#34;  title=&#34;http://www.iteblog.com/archives/1113&#34;
     target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Spark与Mysql(JdbcRDD)整合开发&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;http://www.iteblog.com/archives/1275&#34;  title=&#34;http://www.iteblog.com/archives/1275&#34;
     target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Spark将计算结果写入到Mysql中&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This JdbcRDD style appears to be Scala-only, &lt;a class=&#34;link&#34; href=&#34;http://www.infoobjects.com/spark-sql-jdbcrdd/&#34;  title=&#34;http://www.infoobjects.com/spark-sql-jdbcrdd/&#34;
     target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Spark SQL: JdbcRDD&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;==================================&lt;/p&gt;
&lt;p&gt;The new way:&lt;/p&gt;
&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;http://www.iteblog.com/archives/1290&#34;  title=&#34;http://www.iteblog.com/archives/1290&#34;
     target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Spark RDD写入RMDB(Mysql)方法二&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This one also uses DataFrame, but again in Scala, &lt;a class=&#34;link&#34; href=&#34;http://www.infoobjects.com/spark-connecting-to-a-jdbc-data-source-using-dataframes/&#34;  title=&#34;http://www.infoobjects.com/spark-connecting-to-a-jdbc-data-source-using-dataframes/&#34;
     target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Spark: Connecting to a jdbc data-source using dataframes&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;http://www.sparkexpert.com/2015/03/28/loading-database-data-into-spark-using-data-sources-api/&#34;  title=&#34;http://www.sparkexpert.com/2015/03/28/loading-database-data-into-spark-using-data-sources-api/&#34;
     target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Loading database data into Spark using Data Sources API&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;https://www.supergloo.com/fieldnotes/spark-sql-mysql-example-jdbc/&#34;  title=&#34;https://www.supergloo.com/fieldnotes/spark-sql-mysql-example-jdbc/&#34;
     target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Spark SQL MySQL Example With JDBC&lt;/a&gt;, this one even comes with a video&lt;/p&gt;
&lt;p&gt;A longer read: &lt;a class=&#34;link&#34; href=&#34;https://www.percona.com/blog/2015/10/07/using-apache-spark-mysql-data-analysis/&#34;  title=&#34;https://www.percona.com/blog/2015/10/07/using-apache-spark-mysql-data-analysis/&#34;
     target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Using Apache Spark and MySQL for Data Analysis&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;http://stackoverflow.com/questions/27718382/how-to-work-with-mysql-db-and-apache-spark&#34;  title=&#34;http://stackoverflow.com/questions/27718382/how-to-work-with-mysql-db-and-apache-spark&#34;
     target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;How to work with MySQL DB and Apache spark&lt;/a&gt;, with examples in various languages&lt;/p&gt;
&lt;p&gt;==============================&lt;/p&gt;
&lt;p&gt;Buy one get one free, the HBase version:&lt;/p&gt;
&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;http://www.iteblog.com/archives/1051&#34;  title=&#34;http://www.iteblog.com/archives/1051&#34;
     target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Spark读取Hbase中的数据&lt;/a&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id=&#34;历史评论&#34;&gt;Archived comments
&lt;/h2&gt;&lt;p&gt;&lt;strong&gt;spark 操作 hbase | ZRJ&lt;/strong&gt; (2016-04-21 09:52:30):&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;[…] As mentioned earlier, spark 操作 mysql […]&lt;/p&gt;
&lt;/blockquote&gt;
</description>
        </item>
        <item>
        <title>spark 算子理解和存储方式</title>
        <link>https://blog.zrj.me/posts/2016-04-19-spark-%E7%AE%97%E5%AD%90%E7%90%86%E8%A7%A3%E5%92%8C%E5%AD%98%E5%82%A8%E6%96%B9%E5%BC%8F/</link>
        <pubDate>Tue, 19 Apr 2016 10:24:24 +0800</pubDate>
        
        <guid>https://blog.zrj.me/posts/2016-04-19-spark-%E7%AE%97%E5%AD%90%E7%90%86%E8%A7%A3%E5%92%8C%E5%AD%98%E5%82%A8%E6%96%B9%E5%BC%8F/</guid>
        <description>&lt;p&gt;For understanding combineByKey, see &lt;a class=&#34;link&#34; href=&#34;http://luojinping.com/2016/01/22/%E5%88%9D%E5%AD%A6Spark/&#34;  title=&#34;http://luojinping.com/2016/01/22/%E5%88%9D%E5%AD%A6Spark/&#34;
     target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;http://luojinping.com/2016/01/22/%E5%88%9D%E5%AD%A6Spark/&lt;/a&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;combineByKey usage example, computing an average: val rdd = sc.textFile(&amp;ldquo;weather data&amp;rdquo;) val rdd2 = rdd.map(x=&amp;gt;x.split(&amp;quot; &amp;ldquo;)).map(x =&amp;gt; (x(0).substring(&amp;ldquo;extract year-month from the date&amp;rdquo;),x(1).toInt)) val createCombiner = (k: String, v: Int)=&amp;gt; { (v,1) } val mergeValue = (c:(Int, Int), v:Int) =&amp;gt; { (c._1 + v, c._2 + 1) } val mergeCombiners = (c1:(Int,Int),c2:(Int,Int))=&amp;gt;{ (c1._1 + c2._1, c1._2 + c2._2) } val vdd3 = vdd2.combineByKey( createCombiner, mergeValue, mergeCombiners ) rdd3.foreach(x=&amp;gt;println(x._1 + &amp;ldquo;: average tempreture is &amp;quot; + x._2._1/x._2._2)&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;For combining Spark and HBase see &lt;a class=&#34;link&#34; href=&#34;http://lxw1234.com/archives/2015/07/406.htm&#34;  title=&#34;http://lxw1234.com/archives/2015/07/406.htm&#34;
     target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Spark算子：RDD行动Action操作(7)–saveAsNewAPIHadoopFile、saveAsNewAPIHadoopDataset&lt;/a&gt;， &lt;a class=&#34;link&#34; href=&#34;http://lxw1234.com/archives/2015/07/404.htm&#34;  title=&#34;http://lxw1234.com/archives/2015/07/404.htm&#34;
     target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Spark算子：RDD行动Action操作(6)–saveAsHadoopFile、saveAsHadoopDataset&lt;/a&gt;，&lt;a class=&#34;link&#34; href=&#34;http://lxw1234.com/archives/2015/07/332.htm&#34;  title=&#34;http://lxw1234.com/archives/2015/07/332.htm&#34;
     target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;SparkSQL读取HBase数据，通过自定义外部数据源&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This series of articles is also quite good, &lt;a class=&#34;link&#34; href=&#34;http://lxw1234.com/archives/tag/spark%E7%AE%97%E5%AD%90&#34;  title=&#34;http://lxw1234.com/archives/tag/spark%E7%AE%97%E5%AD%90&#34;
     target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;标签：spark算子&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This one is also good and still being updated, &lt;a class=&#34;link&#34; href=&#34;http://www.lujinhong.com/&#34;  title=&#34;http://www.lujinhong.com/&#34;
     target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;lujinhong&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Explanations of the various operators are also here, with examples that help understanding, &lt;a class=&#34;link&#34; href=&#34;http://www.lujinhong.com/spark%E5%B8%B8%E7%94%A8transformation%E5%92%8Caction.html&#34;  title=&#34;http://www.lujinhong.com/spark%E5%B8%B8%E7%94%A8transformation%E5%92%8Caction.html&#34;
     target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;spark常用transformation和action.html&lt;/a&gt;&lt;/p&gt;
</description>
        </item>
        <item>
        <title>spark 移动均值</title>
        <link>https://blog.zrj.me/posts/2016-04-18-spark-%E7%A7%BB%E5%8A%A8%E5%9D%87%E5%80%BC/</link>
        <pubDate>Mon, 18 Apr 2016 19:43:13 +0800</pubDate>
        
        <guid>https://blog.zrj.me/posts/2016-04-18-spark-%E7%A7%BB%E5%8A%A8%E5%9D%87%E5%80%BC/</guid>
        <description>&lt;p&gt;To compute a moving average on Spark, these are useful references&lt;/p&gt;
&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;http://stackoverflow.com/questions/23402303/apache-spark-moving-average&#34;  title=&#34;http://stackoverflow.com/questions/23402303/apache-spark-moving-average&#34;
     target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;http://stackoverflow.com/questions/23402303/apache-spark-moving-average&lt;/a&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;You can use the sliding function from MLLIB which probably does the same thing as Daniel&amp;rsquo;s answer. You will have to sort the data by time before using the sliding function.&lt;/p&gt;
&lt;p&gt;import org.apache.spark.mllib.rdd.RDDFunctions._&lt;/p&gt;
&lt;p&gt;sc.parallelize(1 to 100, 10) .sliding(3) .map(curSlice =&amp;gt; (curSlice.sum / curSlice.size)) .collect()&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;http://stackoverflow.com/questions/31965615/moving-average-in-spark-java&#34;  title=&#34;http://stackoverflow.com/questions/31965615/moving-average-in-spark-java&#34;
     target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;http://stackoverflow.com/questions/31965615/moving-average-in-spark-java&lt;/a&gt;, the Java rewrite is considerably more verbose.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;I took the question you were referring and struggled for a couple of hours in order to translate the Scala code into Java:&lt;/p&gt;
&lt;p&gt;// Read a file containing the Stock Quotations // You can also paralelize a collection of objects to create a RDD JavaRDD linesRDD = sc.textFile(&amp;ldquo;some sample file containing stock prices&amp;rdquo;);&lt;/p&gt;
&lt;p&gt;// Convert the lines into our business objects JavaRDD quotationsRDD = linesRDD.flatMap(new ConvertLineToStockQuotation());&lt;/p&gt;
&lt;p&gt;// We need these two objects in order to use the MLLib RDDFunctions object ClassTag classTag = scala.reflect.ClassManifestFactory.fromClass(StockQuotation.class); RDD rdd = JavaRDD.toRDD(quotationsRDD);&lt;/p&gt;
&lt;p&gt;// Instantiate a RDDFunctions object to work with RDDFunctions rddFs = RDDFunctions.fromRDD(rdd, classTag);&lt;/p&gt;
&lt;p&gt;// This applies the sliding function and return the (DATE,SMA) tuple JavaPairRDD smaPerDate = rddFs.sliding(slidingWindow).toJavaRDD().mapToPair(new MovingAvgByDateFunction()); List&amp;gt; smaPerDateList = smaPerDate.collect(); Then you have to use a new Function Class to do the actual calculation of each data window:&lt;/p&gt;
&lt;p&gt;public class MovingAvgByDateFunction implements PairFunction {&lt;/p&gt;
&lt;p&gt;/** * */ private static final long serialVersionUID = 9220435667459839141L;&lt;/p&gt;
&lt;p&gt;@Override public Tuple2 call(Object t) throws Exception {&lt;/p&gt;
&lt;p&gt;StockQuotation[] stocks = (StockQuotation[]) t; List stockList = Arrays.asList(stocks);&lt;/p&gt;
&lt;p&gt;Double result = stockList.stream().collect(Collectors.summingDouble(new ToDoubleFunction() {&lt;/p&gt;
&lt;p&gt;@Override public double applyAsDouble(StockQuotation value) { return value.getValue(); } }));&lt;/p&gt;
&lt;p&gt;result = result / stockList.size();&lt;/p&gt;
&lt;p&gt;return new Tuple2(stockList.get(0).getTimestamp(),result); } } If you want more detail on this, I wrote about Simple Moving Averages here: &lt;a class=&#34;link&#34; href=&#34;https://t.co/gmWltdANd3&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://t.co/gmWltdANd3&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;http://victorferrerjava.blogspot.com.es/2016/01/calculating-moving-averages-with-spark.html&#34;  title=&#34;http://victorferrerjava.blogspot.com.es/2016/01/calculating-moving-averages-with-spark.html&#34;
     target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;http://victorferrerjava.blogspot.com.es/2016/01/calculating-moving-averages-with-spark.html&lt;/a&gt;, this is the blog post mentioned above, with plenty of figures&lt;/p&gt;
</description>
        </item>
        <item>
        <title>spark Task not serializable</title>
        <link>https://blog.zrj.me/posts/2016-04-18-spark-task-not-serializable/</link>
        <pubDate>Mon, 18 Apr 2016 13:24:55 +0800</pubDate>
        
        <guid>https://blog.zrj.me/posts/2016-04-18-spark-task-not-serializable/</guid>
        <description>&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;http://stackoverflow.com/questions/23050067/spark-task-not-serializable-how-to-work-with-complex-map-closures-that-call-o&#34;  title=&#34;http://stackoverflow.com/questions/23050067/spark-task-not-serializable-how-to-work-with-complex-map-closures-that-call-o&#34;
     target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;http://stackoverflow.com/questions/23050067/spark-task-not-serializable-how-to-work-with-complex-map-closures-that-call-o&lt;/a&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;In case of using Java API you should avoid anonymous class when passing to the mapping function closure. Instead of doing map( new Function) you need a class that extends your function and pass that to the map(..) See: &lt;a class=&#34;link&#34; href=&#34;https://yanago.wordpress.com/2015/03/21/apache-spark/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://yanago.wordpress.com/2015/03/21/apache-spark/&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;http://www.bubuko.com/infodetail-670338.html&#34;  title=&#34;http://www.bubuko.com/infodetail-670338.html&#34;
     target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;http://www.bubuko.com/infodetail-670338.html&lt;/a&gt;, not a direct fix, but the explanation helps with understanding&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The &amp;ldquo;org.apache.spark.SparkException: Task not serializable&amp;rdquo; error usually means that a map, filter, etc. uses an external variable that cannot be serialized. In particular, referring to a member function or field of some class (often the current class) forces the entire class and all of its members to be serializable. The most common fixes are:&lt;/p&gt;
&lt;p&gt;If possible, define the dependent variable inside the map or filter closure itself, so that a non-serializable class can still be used; if possible, move the dependent variable into a small class of its own and make that class serializable, which also cuts network transfer and improves efficiency; if possible, mark the non-serializable parts of the referenced class as transient, telling the compiler they do not need to be serialized; or make the referenced class serializable. (The last two I have not tried.)&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;-&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;ndash;&lt;/p&gt;
&lt;p&gt;2016-6-18 15:37:50&lt;/p&gt;
&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;https://webcache.googleusercontent.com/search?q=cache:uf9YSQWBvDkJ:https://mail-archives.apache.org/mod&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://webcache.googleusercontent.com/search?q=cache:uf9YSQWBvDkJ:https://mail-archives.apache.org/mod&lt;/a&gt;_mbox/spark-user/201312.mbox/%253CCALAO9hwHovNJPrcGU-skD_A5YPOYpSfaJZCS1jpRYBYGccX8DA%40mail.gmail.com%253E+&amp;amp;cd=4&amp;amp;hl=zh-CN&amp;amp;ct=clnk&amp;amp;gl=us&lt;/p&gt;
&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;http://stackoverflow.com/questions/24046744/javaspark-org-apache-spark-sparkexception-job-aborted-task-not-serializable&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;http://stackoverflow.com/questions/24046744/javaspark-org-apache-spark-sparkexception-job-aborted-task-not-serializable&lt;/a&gt;&lt;/p&gt;
</description>
        </item>
        
    </channel>
</rss>
