
[Machine Learning] Data Preparation -- Python Web Scraping

Preface

我們?cè)趯W(xué)習(xí)機(jī)器學(xué)習(xí)相關(guān)內(nèi)容時(shí),一般是不需要我們自己去爬取數(shù)據(jù)的,因?yàn)楹芏嗟乃惴▽W(xué)習(xí)很友好的幫助我們打包好了相關(guān)數(shù)據(jù),但是這并不代表我們不需要進(jìn)行學(xué)習(xí)和了解相關(guān)知識(shí)。在這里我們了解三種數(shù)據(jù)的爬?。乎r花/明星圖像的爬取、中國藝人圖像的爬取、股票數(shù)據(jù)的爬取。分別對(duì)著三種爬蟲進(jìn)行學(xué)習(xí)和使用。


  • Takeaway
    Personally, I find the hardest part of scraping is obtaining the right URL. Finding it depends a lot on experience, which is hard to pin down; usually you discover the URL by visiting the site and analyzing the captured requests. A dedicated packet-capture tool is rarely necessary: the browser's developer tools (F12 / Fn+F12) are enough.

Scraping Flower/Celebrity Images

Obtaining the URL

  • Search Baidu for the keyword 鮮花 (flowers), open the developer tools, and click the Network tab
  • Find the data request and analyze its important parameters (a request sketch follows this list)

    • pn is the index of the first image to load
    • rn is how many images to load
  • Inspect the response: each image's address is in the thumbURL field
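
To make pn and rn concrete, here is a minimal sketch of a single request to the acjson endpoint. It sends only a handful of the query fields that the full code further down uses, so treat it as an illustration of the pagination parameters rather than a guaranteed-complete request:

import requests

# One paged request: pn is the starting image index, rn the page size.
# Most of the other query fields from the captured request are omitted here.
params = {
    'tn': 'resultjson_com',
    'ipn': 'rj',
    'word': '鮮花',  # search keyword
    'ie': 'utf-8',
    'oe': 'utf-8',
    'pn': 30,       # start from the 30th result (i.e. the second page)
    'rn': 30,       # 30 images per request
}
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}
resp = requests.get('http://image.baidu.com/search/acjson', params=params, headers=headers)
for item in resp.json().get('data') or []:
    if item:
        print(item.get('thumbURL'))  # each image's URL lives in thumbURL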

Download Process

  • http://image.baidu.com/search/acjson? is the Baidu Images endpoint

  • Appending the parameter string (the tn=... query) and requesting the result yields each image's URL in the thumbURL field of the returned data
    https://image.baidu.com/search/acjson?+tn

  • Split out each image's URL, then request it to download the image

Code

import requests
import os
import urllib

class GetImage():
    # Scrape Baidu image search results for a keyword, page by page.
    def __init__(self, keyword='鮮花', paginator=1):
        self.url = 'http://image.baidu.com/search/acjson?'

        self.headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36'
        }

        self.keyword = keyword
        self.paginator = paginator


    def get_param(self):
        # Build one query string per page; pn advances by rn (30) each page.
        keyword = urllib.parse.quote(self.keyword)
        params = []
        for i in range(1, self.paginator + 1):
            params.append(
                'tn=resultjson_com&logid=&ipn=rj&ct=&is=&fp=result&fr=&word={}&queryWord={}&cl=2&lm=-1&ie=utf-8&oe=utf-8&adpicid=&st=&z=&ic=&hd=&latest=&copyright=&s=&se=&tab=&width=&height=&face=&istype=&qc=&nc=1&expermode=&nojc=&isAsync=&pn={}&rn=30&gsm=78&='.format(keyword, keyword, 30 * i)
            )
        return params
    def get_urls(self, params):
        # Prepend the acjson endpoint to every query string.
        urls = []
        for param in params:
            urls.append(self.url + param)
        return urls

    def get_image_url(self, urls):
        # Request each page and collect every thumbURL from the JSON payload.
        image_url = []
        for url in urls:
            json_data = requests.get(url, headers=self.headers).json()
            json_data = json_data.get('data')
            for i in json_data:
                if i:
                    image_url.append(i.get('thumbURL'))
        return image_url
    def get_image(self, image_url):
        # Save each image into a folder named after the keyword.
        file_name = os.path.join("", self.keyword)
        if not os.path.exists(file_name):
            os.makedirs(file_name)

        for index, url in enumerate(image_url, start=1):
            with open(file_name + '/{}.jpg'.format(index), 'wb') as f:
                f.write(requests.get(url, headers=self.headers).content)

            if index % 30 == 0:
                print("Page {} downloaded".format(index // 30))


    def __call__(self, *args, **kwargs):
        # Full pipeline: query strings -> page URLs -> image URLs -> download.
        params = self.get_param()
        urls = self.get_urls(params)
        image_url = self.get_image_url(urls)
        self.get_image(image_url=image_url)


if __name__ == '__main__':
    spider = GetImage('鮮花',3)
    spider()



Scraping Celebrity Images

  • Just change the keyword in the main block to 明星 (celebrity) and the same scraper works

if __name__ == '__main__':
    spider = GetImage('明星',3)
    spider()

Other Topics

  • Likewise, we can swap in any other keyword, e.g. 動(dòng)漫 (anime), when we need other kinds of images
if __name__ == '__main__':
    spider = GetImage('動(dòng)漫',3)
    spider()

Scraping Artist Images

Method 1

  • We can reuse the image scraper above with the keyword changed to 中國藝人 (Chinese artists)

Method 2

  • The approach above covers part of what we need, but if we want images of specific, distinct artists it no longer works well.
  • Instead we download images of 10 different artists, name each file after the artist, and store them all in a picture folder.

Code

import requests
import json
import os
import urllib

def getPicinfo(url):
    # Fetch the API response; return the body text on HTTP 200, else None.
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:101.0) Gecko/ Firefox/101.0',
    }
    response = requests.get(url, headers=headers)

    if response.status_code == 200:
        return response.text
    return None


Download_dir = 'picture'
if not os.path.exists(Download_dir):
    os.mkdir(Download_dir)


pn_num = 1   # number of pages to fetch
rn_num = 10  # results per page

for k in range(pn_num):
    # pn appears to be the result offset and rn the page size.
    url = "https://sp0.baidu.com/8aQDcjqpAAV3otqbppnN2DJv/api.php?resource_id=&from_mid=500&format=json&ie=utf-8&oe=utf-8&query=%E4%B8%AD%E5%9B%BD%E8%89%BA%E4%BA%BA&sort_key=&sort_type=1&stat0=&stat1=&stat2=&stat3=&pn="+str(k*rn_num)+"&rn="+str(rn_num)+"&_="
    res = getPicinfo(url)
    json_str = json.loads(res)
    figs = json_str['data'][0]['result']

    for i in figs:
        name = i['ename']
        img_url = i['pic_4n_78']
        img_res = requests.get(img_url)
        if img_res.status_code == 200:
            # Derive the file extension from the response Content-Type.
            ext = img_res.headers['Content-Type'].split('/')[-1]
            fname = name + '.' + ext
            with open(os.path.join(Download_dir, fname), 'wb') as f:
                f.write(img_res.content)

            print(name, img_url, 'saved')

Scraping Stock Data

We scrape the stock data from http://quote.eastmoney.com/center/gridlist.html and store it to disk.

Scraping Code

# http://quote.eastmoney.com/center/gridlist.html
import requests
from fake_useragent import UserAgent
import json
import csv
import urllib.request as r
import threading

def getHtml(url):
    # Fetch with a random User-Agent to reduce the chance of being blocked.
    resp = requests.get(url, headers={
        'User-Agent': UserAgent().random,
    })
    resp.encoding = resp.apparent_encoding
    return resp.text


# how many rows to fetch
num = 20

stockUrl = 'http://52.push2.eastmoney.com/api/qt/clist/get?cb=jQuery_&pn=1&pz=20&po=1&np=1&ut=bd1d9ddb0cf9c27f6f&fltt=2&invt=2&wbp2u=|0|0|0|web&fid=f3&fs=m:0+t:80&fields=f1,f2,f3,f4,f5,f6,f7,f8,f9,f10,f12,f13,f14,f15,f16,f17,f18,f20,f21,f23,f24,f25,f22,f11,f62,f128,f136,f115,f152&_='


if __name__ == '__main__':
    responseText = getHtml(stockUrl)
    # The response is JSONP, jQuery_(...); strip the callback wrapper.
    jsonText = responseText.split("(")[1].split(")")[0]
    resJson = json.loads(jsonText)
    datas = resJson['data']['diff']
    dataList = []
    for data in datas:
        # f12 is the stock code, f14 the stock name
        row = [data['f12'], data['f14']]
        dataList.append(row)

    print(dataList)

    f = open('stock.csv', 'w+', encoding='utf-8', newline="")
    writer = csv.writer(f)
    writer.writerow(("代碼", "名稱"))  # header row: code, name
    for data in dataList:
        writer.writerow((data[0] + "\t", data[1] + "\t"))
    f.close()


def getStockList():
    # Read the code/name pairs back out of stock.csv.
    stockList = []
    f = open('stock.csv', 'r', encoding='utf-8')
    f.seek(0)
    reader = csv.reader(f)
    for item in reader:
        stockList.append(item)

    f.close()
    return stockList

def downloadFile(url, filepath):
    # Download one history CSV; errors are printed rather than re-raised.
    try:
        r.urlretrieve(url, filepath)
    except Exception as e:
        print(e)
    print(filepath, "is downloaded")

sem = threading.Semaphore(1)  # allow only one download thread at a time

def downloadFileSem(url, filepath):
    with sem:
        downloadFile(url, filepath)

urlStart = 'http://quotes.money.163.com/service/chddata.html?code='
urlEnd = '&end=&fields=TCLOSE;HIGH;TOPEN;LCLOSE;CHG;PCHG;VOTURNOVER;VATURNOVER'

if __name__ == '__main__':
    stockList = getStockList()
    stockList.pop(0)  # drop the header row
    print(stockList)

    for s in stockList:
        scode = str(s[0].split("\t")[0])

        # 163's API prefixes Shanghai codes (6xxxxx) with 0, others with 1.
        url = urlStart + ("0" if scode.startswith('6') else '1') + scode + urlEnd

        print(url)
        filepath = (str(s[1].split("\t")[0]) + "_" + scode) + ".csv"
        threading.Thread(target=downloadFileSem, args=(url, filepath)).start()




Data Processing Code

The data you scraped may be dirty, so the code below won't necessarily run as-is; you may need to clean the data yourself first (a minimal cleaning sketch follows) or handle it some other way.
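
As one hedge against dirty rows, here is a minimal cleaning sketch. It assumes, as the plotting code below does, that the 163 CSVs are GBK-encoded and that bad values in the 漲跌額/漲跌幅 columns show up as 'None' strings or missing entries; clean_stock_csv is a hypothetical helper, not part of the original script:

import pandas as pd

def clean_stock_csv(path):
    # Hypothetical helper: coerce the two change columns to numbers and
    # drop rows that cannot be parsed ('None' strings become NaN).
    data = pd.read_csv(path, encoding='gbk')
    for col in ['漲跌額', '漲跌幅']:
        data[col] = pd.to_numeric(data[col], errors='coerce')
    return data.dropna(subset=['漲跌額', '漲跌幅'])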

## Plotting is done mainly with matplotlib

import pandas as pd
import matplotlib.pyplot as plt
import csv
import 股票數(shù)據(jù)爬取 as gp  # the scraping script above, saved as 股票數(shù)據(jù)爬取.py

plt.rcParams['font.sans-serif'] = ['simhei']  # a font that can render Chinese labels
plt.rcParams['axes.unicode_minus'] = False  # render the minus sign correctly
plt.rcParams['figure.dpi'] = 100  # dots per inch

files = []

def read_file(file_name):
    # The CSVs downloaded from 163 are GBK-encoded.
    data = pd.read_csv(file_name, encoding='gbk')
    col_name = data.columns.values
    return data, col_name

def get_file_path():
    # Keep only the downloaded CSVs that actually contain data rows.
    stock_list = gp.getStockList()
    for stock in stock_list[1:]:
        p = stock[1].strip() + "_" + stock[0].strip() + ".csv"
        print(p)
        data, _ = read_file(p)
        if len(data) > 1:
            files.append(p)
            print(p)

get_file_path()
print(files)

def get_diff(file_name):
    data, col_name = read_file(file_name)
    index = len(data['日期']) - 1
    sep = max(index // 15, 1)  # tick spacing; guard against very short files
    plt.figure(figsize=(15, 17))

    x = data['日期'].values.tolist()
    x.reverse()  # the CSVs are newest-first; plot oldest-first
    xticks = list(range(0, len(x), sep))
    xlabels = [x[i] for i in xticks]


    # 'None' marks missing values in the scraped data; treat them as 0
    y1 = [float(c) if c != 'None' else 0 for c in data['漲跌額'].values.tolist()]
    y2 = [float(c) if c != 'None' else 0 for c in data['漲跌幅'].values.tolist()]

    y1.reverse()
    y2.reverse()

    ax1 = plt.subplot(211)
    plt.plot(range(1,len(x)+1),y1,c='r')
    plt.title('{}-漲跌額/漲跌幅'.format(file_name.split('_')[0]),fontsize = 20)
    ax1.set_xticks(xticks)
    ax1.set_xticklabels(xlabels,rotation = 40)
    plt.ylabel('漲跌額')

    ax2 = plt.subplot(212)
    plt.plot(range(1, len(x) + 1), y2, c='g')  # second panel: 漲跌幅
    ax2.set_xticks(xticks)
    ax2.set_xticklabels(xlabels, rotation=40)
    plt.xlabel('日期')
    plt.ylabel('漲跌幅')
    plt.show()


print(len(files))
for file in files:
    get_diff(file)

Summary

The article above walked through three data-scraping cases. Different data calls for obtaining different URLs and supplying different parameters; working out how those URLs are assembled and where to find them is the hard part of scraping, and it takes a certain amount of experience and groundwork.

