This article mainly introduces how to implement file reading, writing, and console operations in Scala. It has some practical reference value, so interested readers may want to follow along; hopefully you will come away having learned something.
Reading files in Scala
Assume a text file named scalaIO.txt, containing a few lines of text, sits in the root of drive E:.
Sample code for reading the file:
import scala.io.Source

// Read a file line by line
val file = Source.fromFile("E:\\scalaIO.txt")
for (line <- file.getLines()) {
  println(line)
}
file.close()
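Note that if an exception is thrown while reading, the close() call above is never reached. A slightly more defensive variant (a minimal sketch, not from the original code) wraps the read in try/finally:

val source = Source.fromFile("E:\\scalaIO.txt")
try {
  source.getLines().foreach(println) // print every line
} finally {
  source.close() // runs even if reading fails
}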
Note 1: in file = Source.fromFile("E:\\scalaIO.txt"), the fromFile() method comes from scala.io.Source, which is why the import scala.io.Source statement is needed.
file.getLines() returns an iterator, an Iterator[String]; it is defined in scala.io.Source.
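Because the result is an Iterator[String], the lines are read lazily and can be traversed only once. A small sketch (variable names are illustrative) that materializes the lines before closing the source:

val source = Source.fromFile("E:\\scalaIO.txt")
val lines = source.getLines().toList // an Iterator is single-pass; toList materializes it
source.close()
println(s"read ${lines.size} lines; first line: ${lines.headOption.getOrElse("")}")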
Reading network resources in Scala
//網(wǎng)絡(luò)資源讀取 val webFile=Source.fromURL("http://spark.apache.org") webFile.foreach(print) webFile.close()
The source of the fromURL() method is as follows:
/** same as fromURL(new URL(s)) */
def fromURL(s: String)(implicit codec: Codec): BufferedSource =
  fromURL(new URL(s))(codec)
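The implicit codec parameter means the character encoding can also be supplied explicitly at the call site. A short sketch (using the same URL as above):

import scala.io.{Codec, Source}

val utf8Page = Source.fromURL("http://spark.apache.org")(Codec.UTF8) // explicit codec instead of the implicit default
utf8Page.foreach(print)
utf8Page.close()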
The content of the network resource that was read looks like this:
Apache Spark™ - Lightning-Fast Cluster Computing
Latest News
- Submission is open for Spark Summit East 2016 (Oct 14, 2015)
- Spark 1.5.1 released (Oct 02, 2015)
- Spark 1.5.0 released (Sep 09, 2015)
- Spark Summit Europe agenda posted (Sep 07, 2015)
Apache Spark™ is a fast and general engine for large-scale data processing.
Speed
Run programs up to 100x faster than Hadoop MapReduce in memory, or 10x faster on disk.
Spark has an advanced DAG execution engine that supports cyclic data flow and in-memory computing.
Logistic regression in Hadoop and Spark
Ease of Use
Write applications quickly in Java, Scala, Python, R.
Spark offers over 80 high-level operators that make it easy to build parallel apps. And you can use it interactively from the Scala, Python and R shells.
text_file = spark.textFile("hdfs://...")
text_file.flatMap(lambda line: line.split())
    .map(lambda word: (word, 1))
    .reduceByKey(lambda a, b: a+b)
Word count in Spark's Python API
Generality
Combine SQL, streaming, and complex analytics.
Spark powers a stack of libraries including SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming. You can combine these libraries seamlessly in the same application.
Runs Everywhere
Spark runs on Hadoop, Mesos, standalone, or in the cloud. It can access diverse data sources including HDFS, Cassandra, HBase, and S3.
You can run Spark using its standalone cluster mode, on EC2, on Hadoop YARN, or on Apache Mesos. Access data in HBase, Tachyon, and any Hadoop data source.
Community
Spark is used at a wide range of organizations to process large datasets. You can find example use cases at the Spark Summit conference, or on the Powered By page.
There are many ways to reach the community:
- Use the mailing lists to ask questions.
- In-person events include the Bay Area Spark meetup and Spark Summit.
- We use JIRA for issue tracking.
Contributors
Apache Spark is built by a wide set of developers from over 200 companies. Since 2009, more than 800 developers have contributed to Spark!
The project's committers come from 16 organizations.
If you'd like to participate in Spark, or contribute to the libraries on top of it, learn how to contribute.
Getting Started
Learning Spark is easy whether you come from a Java or Python background:
- Download the latest release — you can run Spark locally on your laptop.
- Read the quick start guide.
- Spark Summit 2014 contained free training videos and exercises.
- Learn how to deploy Spark on a cluster.
//網(wǎng)絡(luò)資源讀取 val webFile=Source.fromURL("http://www.baidu.com/") webFile.foreach(print) webFile.close()
Reading a Chinese-language site runs into a character-encoding problem, shown below (a full fix is left to the reader, as encodings are not the focus of this article):
Exception in thread "main" java.nio.charset.MalformedInputException: Input length = 1
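Although the fix is outside the scope of this article, one common remedy is to pass a Codec that matches the site's actual encoding. A hedged sketch (the charset name here is an assumption; use whatever the page actually declares, e.g. in its Content-Type header):

import scala.io.{Codec, Source}

// Assumption: the page is GBK-encoded; adjust the charset name to match the site
val cnPage = Source.fromURL("http://www.baidu.com/")(Codec("GBK"))
cnPage.foreach(print)
cnPage.close()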
Thank you for reading this article. Hopefully this walkthrough of how to implement file reading, writing, and console operations in Scala has been helpful; there is plenty more related material to explore.