Today, with nothing much to do, I happened to come across Baidu Stocks and thought I would use Python to crawl some data. I found the Eastmoney site as well, and by combining the two sites I wrote a small crawler that saves the data to a file. It is a fairly simple example, really just practice with regular expressions and BeautifulSoup.
First, a look at the pages: open the Eastmoney stock list page and a Baidu Stocks detail page, then right-click and view the page source. The string at the end of each detail-page URL is the stock code, so the plan is to first collect the stock codes from the list page and then fetch the detail page for each one.
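To make the "code at the end of the URL" idea concrete, here is a minimal sketch of the regular expression the crawler uses to pull Shanghai (sh) and Shenzhen (sz) codes out of link targets; the sample hrefs below are made up purely for illustration:

import re

# Hypothetical hrefs shaped like the ones on the Eastmoney list page
hrefs = [
    'http://quote.eastmoney.com/sh600000.html',
    'http://quote.eastmoney.com/sz000001.html',
    'http://quote.eastmoney.com/center.html',   # no stock code, gets skipped
]

codes = []
for href in hrefs:
    found = re.findall(r'[s][hz]\d{6}', href)   # sh or sz followed by six digits
    if found:
        codes.append(found[0])

print(codes)  # ['sh600000', 'sz000001']

Enough talk, here is the full crawler: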
import re
import requests
from bs4 import BeautifulSoup

# Fetch the HTML of a page
def getHtml(url):
    try:
        req = requests.get(url, timeout=30)
        req.raise_for_status()
        req.encoding = req.apparent_encoding
        return req.text
    except Exception:
        print('getHtml failed:', url)
        return ''

# Collect stock codes (sh/sz plus six digits) from the links on the list page
def getStockList(lst, stockUrl):
    html = getHtml(stockUrl)
    soup = BeautifulSoup(html, 'html.parser')
    for a in soup.find_all('a'):
        try:
            href = a.attrs['href']
            lst.append(re.findall(r'[s][hz]\d{6}', href)[0])
        except (KeyError, IndexError):
            continue

# Fetch each stock's detail page and append its fields to the output file
def getStockInfo(lst, stockUrl, fpath):
    count = 0
    for stock in lst:
        url = stockUrl + stock + '.html'
        html = getHtml(url)
        try:
            if not html:
                continue
            infoDict = {}
            soup = BeautifulSoup(html, 'html.parser')
            stockInfo = soup.find('div', attrs={'class': 'stock-bets'})
            name = stockInfo.find_all(attrs={'class': 'bets-name'})[0]
            infoDict.update({'股票名稱': name.text.split()[0]})  # stock name
            keyList = stockInfo.find_all('dt')
            valueList = stockInfo.find_all('dd')
            for i in range(len(keyList)):
                infoDict[keyList[i].text] = valueList[i].text
            with open(fpath, 'a', encoding='utf-8') as f:
                f.write(str(infoDict) + '\n')
            count += 1
            print('\rProgress: {:.2f}%'.format(count * 100 / len(lst)), end='')
        except Exception:
            count += 1
            print('\rProgress (error): {:.2f}%'.format(count * 100 / len(lst)), end='')
            continue

def main():
    stockListUrl = 'http://quote.eastmoney.com/stocklist.html'
    stockInfoUrl = 'https://gupiao.baidu.com/stock/'
    outPutFile = r'D:\python\shuju\stockInfo.txt'  # raw string so the backslashes stay literal
    slist = []
    getStockList(slist, stockListUrl)
    getStockInfo(slist, stockInfoUrl, outPutFile)

main()
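Each line written to stockInfo.txt is simply str() of a Python dictionary, so the records are easy to read back later. Here is a minimal sketch, assuming the same output path used in main() above:

import ast

# Assumes the output path used in main() above; adjust as needed
outPutFile = r'D:\python\shuju\stockInfo.txt'

records = []
with open(outPutFile, encoding='utf-8') as f:
    for line in f:
        line = line.strip()
        if line:
            # Each line was written with str(infoDict), so it parses as a dict literal
            records.append(ast.literal_eval(line))

print('loaded', len(records), 'stocks')

ast.literal_eval is preferable to eval here because it only accepts Python literals and will not execute arbitrary expressions.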