How is word-frequency counting done with Spark? This article walks through the question in detail and provides a worked answer, in the hope of giving readers a simple, practical approach.
Download Spark 1.5.2, Pre-Built for Hadoop 2.6, from http://spark.apache.org/downloads.html. Java and Scala must be installed beforehand.
Extract the Spark distribution to /opt/spark-hadoop. Running ./bin/spark-shell should open a Scala shell, and running ./bin/pyspark should open a Python shell; if both start, the installation succeeded.
Copy the pyspark package from Spark's python directory into Python's package directory, e.g. /usr/local/lib/python2.7/dist-packages, so that pyspark can be imported in standalone programs.
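As an alternative to copying files into the package directory, the interpreter can be pointed at Spark's bundled Python package through environment variables. This is a sketch assuming the /opt/spark-hadoop path used above; the py4j zip file name varies by Spark release, so check the actual name under $SPARK_HOME/python/lib:

```shell
# Make pyspark importable without copying it into site-packages.
# Paths follow this article's layout; adjust to your install.
export SPARK_HOME=/opt/spark-hadoop
export PYTHONPATH="$SPARK_HOME/python:$SPARK_HOME/python/lib/py4j-0.8.2.1-src.zip:$PYTHONPATH"

# A plain interpreter can now run: from pyspark import SparkContext
echo "PYTHONPATH=$PYTHONPATH"
```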
```python
#!/usr/bin/python
# -*- coding:utf-8 -*-
from pyspark import SparkConf, SparkContext
import os

os.environ["SPARK_HOME"] = "/opt/spark-hadoop"

APP_NAME = "TopKeyword"

if __name__ == "__main__":
    logFile = "./README.md"
    sc = SparkContext("local", "Simple App")
    logData = sc.textFile(logFile).cache()
    # Count lines containing the letter 'a' and lines containing 'b'
    numAs = logData.filter(lambda s: 'a' in s).count()
    numBs = logData.filter(lambda s: 'b' in s).count()
    print("Lines with a: %i, lines with b: %i" % (numAs, numBs))
```
Output:
Lines with a: 3, lines with b: 2
```python
#!/usr/bin/python
# -*- coding:utf-8 -*-
from pyspark import SparkConf, SparkContext
import jieba
import os
import sys

reload(sys)
sys.setdefaultencoding("utf-8")

os.environ["SPARK_HOME"] = "/opt/spark-hadoop"


def divide_word():
    # Segment each question title with jieba and append the
    # space-separated words to question_word.txt
    word_txt = open('question_word.txt', 'a')
    with open('question_title.txt', 'r') as question_txt:
        question = question_txt.readline()
        while question:
            seg_list = jieba.cut(question, cut_all=False)
            line = " ".join(seg_list)
            word_txt.write(line)
            question = question_txt.readline()
    word_txt.close()


def word_count():
    sc = SparkContext("local", "WordCount")
    text_file = sc.textFile("./question_word.txt").cache()
    # Split lines into words, pair each word with 1, then sum per word
    counts = text_file.flatMap(lambda line: line.split(" ")) \
                      .map(lambda word: (word, 1)) \
                      .reduceByKey(lambda a, b: a + b)
    counts.saveAsTextFile("./wordcount_result.txt")


if __name__ == "__main__":
    word_count()
```
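Note that the saved result is a set of unsorted (word, count) pairs; to get the top keywords, the counts still need a sort, e.g. with something like `counts.sortBy(lambda kv: -kv[1]).take(10)` on the RDD. The underlying logic of flatMap / map / reduceByKey can also be sketched in plain Python, which makes the pipeline easy to check without a Spark installation (the input data here is made up for illustration):

```python
# Plain-Python sketch of the flatMap -> map -> reduceByKey pipeline,
# followed by the top-K sort that the Spark job above leaves out.
lines = [
    "spark word count",
    "word count in spark",
]

# flatMap: split every line into words, producing one flat list
words = [w for line in lines for w in line.split(" ")]

# map: pair each word with the count 1
pairs = [(w, 1) for w in words]

# reduceByKey: sum the 1s per distinct word
counts = {}
for word, n in pairs:
    counts[word] = counts.get(word, 0) + n

# top-K: sort by count, descending, and keep the first 3
top = sorted(counts.items(), key=lambda kv: -kv[1])[:3]
print(top)  # e.g. [('spark', 2), ('word', 2), ('count', 2)]
```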
That concludes the answer to how word-frequency counting is done with Spark. Hopefully the content above is of some help; if questions remain, further related material can be found in the industry news channel.