package test;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
public class HttpTest {

    private String u;
    private String encoding;

    public static void main(String[] args) throws Exception {
        HttpTest client = new HttpTest("", "UTF-8");
        client.run();
    }

    public HttpTest(String u, String encoding) {
        this.u = u;
        this.encoding = encoding;
    }

    public void run() throws Exception {
        URL url = new URL(u); // build a URL object from the link (given as a string)
        HttpURLConnection urlConnection = (HttpURLConnection) url
                .openConnection(); // open the connection to the URL
        BufferedReader reader = new BufferedReader(new InputStreamReader(
                urlConnection.getInputStream(), encoding)); // get the input stream, i.e. the page content
        String line; // read the stream line by line and print it
        while ((line = reader.readLine()) != null) {
            System.out.println(line);
        }
        reader.close(); // release the stream and the connection
        urlConnection.disconnect();
    }
}
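The example above is enough for a quick test, but a crawler usually also wants a connection timeout, a read timeout, and a User-Agent header, and should close the stream when it is done. The following is only a minimal sketch of those additions; the timeout values and the User-Agent string are placeholders of my own, not part of the original example:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class HttpTestWithTimeouts {
    public static String fetch(String link, String encoding) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(link).openConnection();
        conn.setConnectTimeout(5000);  // placeholder: 5 s to establish the connection
        conn.setReadTimeout(10000);    // placeholder: 10 s to read the response
        conn.setRequestProperty("User-Agent", "SimpleCrawler/0.1"); // placeholder UA string
        StringBuilder sb = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), encoding))) {
            String line;
            while ((line = reader.readLine()) != null) {
                sb.append(line).append('\n');
            }
        } finally {
            conn.disconnect(); // release the underlying connection even on error
        }
        return sb.toString();
    }
}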
A web crawler is a program that automatically fetches web pages; it downloads pages from the World Wide Web for a search engine and is a core part of one. A traditional crawler starts from the URLs of one or more seed pages, collects the URLs found on them, and as it crawls keeps extracting new URLs from the current page and adding them to a queue until some stop condition is met. For vertical search, a focused crawler, i.e. one that only crawls pages on a specific topic, is the better fit.

Below is the core code of a simple crawler implemented in Java:

public void crawl() throws Throwable {
    while (continueCrawling()) {
        CrawlerUrl url = getNextUrl(); // take the next URL from the crawl queue
        if (url != null) {
            printCrawlInfo();
            String content = getContent(url); // fetch the text of the URL

            // A focused crawler only keeps pages related to its topic;
            // here a simple regular-expression match is used.
            if (isContentRelevant(content, this.regexpSearchPattern)) {
                saveContent(url, content); // save the page locally

                // extract the links in the page and add them to the crawl queue
                Collection urlStrings = extractUrls(content, url);
                addUrlsToUrlQueue(url, urlStrings);
            } else {
                System.out.println(url + " is not relevant ignoring ...");
            }

            // pause between requests so the server does not block us
            Thread.sleep(this.delayBetweenUrls);
        }
    }
    closeOutputStream();
}

private CrawlerUrl getNextUrl() throws Throwable {
    CrawlerUrl nextUrl = null;
    while ((nextUrl == null) && (!urlQueue.isEmpty())) {
        CrawlerUrl crawlerUrl = this.urlQueue.remove();
        // doWeHavePermissionToVisit: may we visit this URL? A polite crawler
        //   follows the rules the site publishes in its "robots.txt".
        // isUrlAlreadyVisited: has the URL been visited before? Large search
        //   engines usually deduplicate with a BloomFilter; a HashMap is used here.
        // isDepthAcceptable: has the depth limit been reached? Crawlers usually
        //   work breadth-first; some sites build crawler traps (auto-generated
        //   dead links that trap a crawler in an endless loop), which a depth
        //   limit helps avoid.
        if (doWeHavePermissionToVisit(crawlerUrl)
                && (!isUrlAlreadyVisited(crawlerUrl))
                && isDepthAcceptable(crawlerUrl)) {
            nextUrl = crawlerUrl;
            // System.out.println("Next url to be visited is " + nextUrl);
        }
    }
    return nextUrl;
}

private String getContent(CrawlerUrl url) throws Throwable {
    // HttpClient 4.1 is called differently from earlier versions
    HttpClient client = new DefaultHttpClient();
    HttpGet httpGet = new HttpGet(url.getUrlString());
    StringBuffer strBuf = new StringBuffer();
    HttpResponse response = client.execute(httpGet);
    if (HttpStatus.SC_OK == response.getStatusLine().getStatusCode()) {
        HttpEntity entity = response.getEntity();
        if (entity != null) {
            BufferedReader reader = new BufferedReader(
                    new InputStreamReader(entity.getContent(), "UTF-8"));
            String line = null;
            if (entity.getContentLength() > 0) {
                strBuf = new StringBuffer((int) entity.getContentLength());
                while ((line = reader.readLine()) != null) {
                    strBuf.append(line);
                }
            }
        }
        if (entity != null) {
            entity.consumeContent();
        }
    }
    // mark the URL as visited
    markUrlAsVisited(url);
    return strBuf.toString();
}

public static boolean isContentRelevant(String content,
        Pattern regexpPattern) {
    boolean retValue = false;
    if (content != null) {
        // does the content match the regular expression?
        Matcher m = regexpPattern.matcher(content.toLowerCase());
        retValue = m.find();
    }
    return retValue;
}

public List extractUrls(String text, CrawlerUrl crawlerUrl) {
    Map urlMap = new HashMap();
    extractHttpUrls(urlMap, text);
    extractRelativeUrls(urlMap, text, crawlerUrl);
    return new ArrayList(urlMap.keySet());
}

private void extractHttpUrls(Map urlMap, String text) {
    Matcher m = httpRegexp.matcher(text);
    while (m.find()) {
        String url = m.group();
        String[] terms = url.split("a href=\"");
        for (String term : terms) {
            // System.out.println("Term = " + term);
            if (term.startsWith("http")) {
                int index = term.indexOf("\"");
                if (index > 0) {
                    term = term.substring(0, index);
                }
                urlMap.put(term, term);
                System.out.println("Hyperlink: " + term);
            }
        }
    }
}

private void extractRelativeUrls(Map urlMap, String text,
        CrawlerUrl crawlerUrl) {
    Matcher m = relativeRegexp.matcher(text);
    URL textURL = crawlerUrl.getURL();
    String host = textURL.getHost();
    while (m.find()) {
        String url = m.group();
        String[] terms = url.split("a href=\"");
        for (String term : terms) {
            if (term.startsWith("/")) {
                int index = term.indexOf("\"");
                if (index > 0) {
                    term = term.substring(0, index);
                }
                String s = "http://" + host + term;
                urlMap.put(s, s);
                System.out.println("Relative url: " + s);
            }
        }
    }
}

public static void main(String[] args) {
    try {
        String url = "";
        Queue urlQueue = new LinkedList();
        String regexp = "java";
        urlQueue.add(new CrawlerUrl(url, 0));
        NaiveCrawler crawler = new NaiveCrawler(urlQueue, 100, 5, 1000L,
                regexp);
        // boolean allowCrawl = crawler.areWeAllowedToVisit(url);
        // System.out.println("Allowed to crawl: " + url + " " + allowCrawl);
        crawler.crawl();
    } catch (Throwable t) {
        System.out.println(t.toString());
        t.printStackTrace();
    }
}
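The snippet above relies on a CrawlerUrl helper and a NaiveCrawler class whose definitions are not shown. Their exact fields are not given in the original, so the following is only a minimal sketch of a CrawlerUrl that would satisfy the calls used above (the field names and toString() are my assumptions, not the original author's code):

import java.net.MalformedURLException;
import java.net.URL;

// Hypothetical reconstruction of the CrawlerUrl value object used above:
// it simply pairs a URL string with the depth at which it was discovered.
public class CrawlerUrl {
    private final String urlString; // the link as found in the page
    private final int depth;        // 0 for seed URLs, parent depth + 1 otherwise

    public CrawlerUrl(String urlString, int depth) {
        this.urlString = urlString;
        this.depth = depth;
    }

    public String getUrlString() {
        return urlString;
    }

    public int getDepth() {
        return depth;
    }

    // Used by extractRelativeUrls() to resolve relative links against the host.
    public URL getURL() throws MalformedURLException {
        return new URL(urlString);
    }

    @Override
    public String toString() {
        return urlString + " (depth " + depth + ")";
    }
}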
1. You can use Java code to fetch the HTML of the whole page, as follows.
(Note: to process the page content you need the htmlparser package on the classpath.)
import org.htmlparser.util.ParserException;
import org.htmlparser.visitors.HtmlPage;
import org.htmlparser.Parser;
import org.htmlparser.filters.HasAttributeFilter;
import org.htmlparser.util.NodeList;
public class htmlmover {

    public static void main(String[] args) {
        NodeList rt = getNodeList("");
        System.out.println(rt.toHtml());
    }

    public static NodeList getNodeList(String url) {
        Parser parser = null;
        HtmlPage visitor = null;
        try {
            parser = new Parser(url);
            parser.setEncoding("GBK");
            visitor = new HtmlPage(parser);
            parser.visitAllNodesWith(visitor);
        } catch (ParserException e) {
            e.printStackTrace();
        }
        NodeList nodeList = visitor.getBody();
        return nodeList;
    }
}
In the code above, public static NodeList getNodeList(String url) is the main part.
It takes the URL of the page to analyse (a String) and returns the page's HTML nodes as a NodeList.
There is not much to say about the method itself; it was confusing at first (I had never used the library), but after a few uses the basics became clear.
Note the line parser.setEncoding("GBK"); your project may well be set to UTF-8, so change it if you get errors.
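If you are unsure which encoding the page actually uses, a small check like the one below can save guesswork; this is only a sketch, and the empty URL is a placeholder exactly as in the examples above:

import org.htmlparser.Parser;
import org.htmlparser.util.ParserException;

public class EncodingCheck {
    public static void main(String[] args) throws ParserException {
        String url = "";                 // placeholder, as in the examples above
        Parser parser = new Parser(url);
        // htmlparser tries to detect the charset from the HTTP headers / meta tag;
        // print what it detected before forcing GBK (or UTF-8) by hand.
        System.out.println("Detected encoding: " + parser.getEncoding());
        // parser.setEncoding("UTF-8");  // override only if the page really is UTF-8
    }
}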
Run the program.
2. Or view the HTML directly with the browser's developer tools (press F12 in IE). (I only discovered this later, which is why I fumbled with the code above first.)
The HTML you get back looks overwhelming, but don't worry: locate the content you want to scrape and find a distinctive node near it. For example:
<!--中行牌價(jià) 開始-->
<div id="sw01_con1">
<table width="655" border="0" cellspacing="0" cellpadding="0" class="hgtab">
<thead>
<tr>
<th width="85" align="center" class="th_l">交易幣種</th>
<th width="80" align="center">交易單位</th>
<th width="130" align="center">現(xiàn)價(jià)(人民幣)</th>
<th width="80" align="center">賣出價(jià)</th>
<th width="100" align="center">現(xiàn)匯買入價(jià)</th>
<th width="95" align="center">現(xiàn)鈔買入價(jià)</th>
</tr>
</thead>
<tbody>
<tr align="center">
<td> 英鎊</td>
<td>100</td>
<td>992.7</td>
<td>1001.24</td>
<td>993.26</td>
<td class="no">962.6</td>
</tr>
<tr align="center" bgcolor="#f2f3f4">
<td> 港幣</td>
<td>100</td>
<td>81.54</td>
<td>82.13</td>
<td>81.81</td>
<td class="no">81.16</td>
</tr>
<tr align="center">
<td> 美元</td>
<td>100</td>
<td>635.49</td>
<td>639.35</td>
<td>636.8</td>
<td class="no">631.69</td>
</tr>
<tr align="center" bgcolor="#f2f3f4">
<td> 瑞士法郎</td>
<td>100</td>
<td>710.89</td>
<td>707.78</td>
<td>702.14</td>
<td class="no">680.46</td>
</tr>
<tr align="center">
<td> 新加坡元</td>
<td>100</td>
<td>492.45</td>
<td>490.17</td>
<td>486.27</td>
<td class="no">471.25</td>
</tr>
<tr align="center" bgcolor="#f2f3f4">
<td> 瑞典克朗</td>
<td>100</td>
<td>93.66</td>
<td>93.79</td>
<td>93.04</td>
<td class="no">90.17</td>
</tr>
<tr align="center">
<td> 丹麥克朗</td>
<td>100</td>
<td>116.43</td>
<td>115.59</td>
<td>114.67</td>
<td class="no">111.13</td>
</tr>
<tr align="center" bgcolor="#f2f3f4">
<td> 挪威克朗</td>
<td>100</td>
<td>110.01</td>
<td>109.6</td>
<td>108.73</td>
<td class="no">105.37</td>
</tr>
<!--{2011-10-01 23:16:00}-->
</tbody>
</table>
</div>
<!--中行牌價(jià) 結(jié)束-->
As you can see, this is very regular, cleanly written HTML (and this is only the first part, the Bank of China rates; you can expect three more similar, parallel sections after it).
Suppose you want to pull the data out of these nodes.
The following code again requires the htmlparser support package.
import java.util.ArrayList;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.htmlparser.Node;
import org.htmlparser.NodeFilter;
import org.htmlparser.Parser;
import org.htmlparser.util.NodeList;
import org.htmlparser.util.ParserException;
public class Currencyrate {

    public static void main(String[] args) {
        String url = "";
        ArrayList<String> rt = getNodeList(url);
        for (int i = 0; i < rt.size(); i++) {
            System.out.println(rt.get(i));
        }
    }

    public static ArrayList<String> getNodeList(String url) {
        final ArrayList<String> result = new ArrayList<String>();
        Parser parser = null;
        NodeList nodeList = null;
        try {
            parser = new Parser(url);
            parser.setEncoding("GBK");
            nodeList = parser.parse(
                new NodeFilter() {
                    @Override
                    public boolean accept(Node node) {
                        Node need = node;
                        if (getStringsByRegex(node.getText())) {
                            for (int i = 0; i < 6; i++) {
                                result.add(need.toPlainTextString());
                                need = need.getPreviousSibling().getPreviousSibling();
                            }
                            return true;
                        }
                        return false;
                    }
                }
            );
        } catch (ParserException e) {
            e.printStackTrace();
        }
        return result;
    }

    public static boolean getStringsByRegex(String txt) {
        String regex = "td class=\"no\"";
        Pattern p = Pattern.compile(regex);
        Matcher m = p.matcher(txt);
        if (m.find()) {
            return true;
        }
        return false;
    }
}
Without further ado:
public static ArrayList<String> getNodeList(String url) is the main method.
parser.setEncoding("GBK"); again, watch the encoding.
nodeList = parser.parse(
    new NodeFilter() {
        @Override
        public boolean accept(Node node) {
        }
    }
);
nodeList is the list of HTML nodes. Here a NodeFilter (node filter) instance is used, overriding the accept() method declared by NodeFilter.
As the Parser walks the whole HTML page, every time it meets an HTML node it calls this accept() method; if accept() returns true, the node is added to nodeList, otherwise it is not. That is all a NodeFilter does.
In the first snippet, which fetched the whole page, parser.visitAllNodesWith(visitor); simply collected every node.
So to scrape just the content we want, we only have to tell accept() which nodes belong in nodeList, i.e. for which nodes it should return true.
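As an aside, htmlparser also ships ready-made filters, so writing a custom accept() is not the only option. The sketch below uses the built-in HasAttributeFilter (already imported but unused in the first snippet); the empty URL is a placeholder and GBK is just carried over from the original code:

import org.htmlparser.Parser;
import org.htmlparser.filters.HasAttributeFilter;
import org.htmlparser.util.NodeList;
import org.htmlparser.util.ParserException;

public class NoCellFilterDemo {
    public static void main(String[] args) throws ParserException {
        String url = "";                 // placeholder, as in the original examples
        Parser parser = new Parser(url);
        parser.setEncoding("GBK");       // same encoding assumption as the original code
        // Keep only nodes carrying class="no", i.e. the cash-buying-price cells.
        NodeList cells = parser.extractAllNodesThatMatch(
                new HasAttributeFilter("class", "no"));
        for (int i = 0; i < cells.size(); i++) {
            System.out.println(cells.elementAt(i).toPlainTextString());
        }
    }
}

Note that this only captures the class="no" cells; the custom filter below is what walks back through the previous siblings to collect the rest of each row, which is why the article uses it.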
Back to our custom filter, then; this is its accept():
public boolean accept(Node node) {
    Node need = node;
    if (getStringsByRegex(node.getText())) {
        for (int i = 0; i < 6; i++) {
            result.add(need.toPlainTextString());
            need = need.getPreviousSibling().getPreviousSibling();
        }
        return true;
    }
    return false;
}
Whenever the Parser meets a node it hands the node to accept(); accept() examines it, and if getStringsByRegex(node.getText()) is satisfied, the node is kept.
Next, getStringsByRegex(); only one step left, stay with me!
public static boolean getStringsByRegex(String txt) {
    String regex = "td class=\"no\"";
    Pattern p = Pattern.compile(regex);
    Matcher m = p.matcher(txt);
    if (m.find()) {
        return true;
    }
    return false;
}
You can see that every fragment we want looks like this:
<tr align="center">
<td> 英鎊</td>
<td>100</td>
<td>992.7</td>
<td>1001.24</td>
<td>993.26</td>
<td class="no">962.6</td>
</tr>
So it is enough to find the td class="no" node in each row, and we test for it with a regular expression.
String regex = "td class=\"no\""; is the pattern to compare against (in the regular expression td class="no", the two double quotes have to be written as the escape sequence \").
The txt parameter is the node.getText() of the node we were handed; if it matches, m.find() is true, so getStringsByRegex() returns true, meaning the node is one of those we want, and therefore:
for (int i = 0; i < 6; i++) {
    result.add(need.toPlainTextString());
    need = need.getPreviousSibling().getPreviousSibling();
}
In each HTML fragment, six values form one group: 962.6 first, then 993.26, 1001.24, 992.7, 100, and 英鎊 are added to the result ArrayList<String> in that order before it returns; this ArrayList holds exactly the data we set out to scrape.
Print the String values you get to confirm the order; once main() has the ArrayList<String>, you can show it in whatever Java widget you need.
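Because the filter walks backwards through the previous siblings, each group of six comes out in reverse column order. If you prefer the natural left-to-right order (currency name first, cash buying price last), a small post-processing step can flip each group of six. This is only a minimal sketch, assuming it is fed the result list produced by getNodeList() above:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class RowOrderFixer {
    // Reverses every group of six entries so each row reads:
    // currency, unit, price in RMB, selling price, buying price (exchange), buying price (cash).
    public static List<String> toNaturalOrder(List<String> scraped) {
        List<String> ordered = new ArrayList<String>();
        for (int start = 0; start + 6 <= scraped.size(); start += 6) {
            List<String> row = new ArrayList<String>(scraped.subList(start, start + 6));
            Collections.reverse(row); // 962.6 ... 英鎊  becomes  英鎊 ... 962.6
            ordered.addAll(row);
        }
        return ordered;
    }
}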