This article explains how to collect data with a Python crawler, using Douyin as the stated example. The explanation is simple and clear and should be easy to follow, so let's work through the topic step by step.
廊坊網(wǎng)站制作公司哪家好,找創(chuàng)新互聯(lián)!從網(wǎng)頁(yè)設(shè)計(jì)、網(wǎng)站建設(shè)、微信開(kāi)發(fā)、APP開(kāi)發(fā)、響應(yīng)式網(wǎng)站等網(wǎng)站項(xiàng)目制作,到程序開(kāi)發(fā),運(yùn)營(yíng)維護(hù)。創(chuàng)新互聯(lián)于2013年創(chuàng)立到現(xiàn)在10年的時(shí)間,我們擁有了豐富的建站經(jīng)驗(yàn)和運(yùn)維經(jīng)驗(yàn),來(lái)保證我們的工作的順利進(jìn)行。專注于網(wǎng)站建設(shè)就選創(chuàng)新互聯(lián)。
爬蟲(chóng)就是我們利用某種程序代替人工批量讀取、獲取網(wǎng)站上的資料信息。而反爬則是跟爬蟲(chóng)的對(duì)立面,是竭盡全力阻止非人為的采集網(wǎng)站信息,二者相生相克,水火不容,到目前為止大部分的網(wǎng)站都還是可以輕易的爬取資料信息。
爬蟲(chóng)想要繞過(guò)被反的策略就是盡可能的讓服務(wù)器人你不是機(jī)器程序,所以在程序中就要把自己偽裝成瀏覽器訪問(wèn)網(wǎng)站,這可以極大程度降低被反的概率,那如何做到偽裝瀏覽器呢?
For example:
Accept: the content types the client can handle, comma-separated in order of preference; each entry is a type/subtype pair (e.g. text/html), optionally followed by a quality value after a semicolon;
Accept-Encoding: the response compression encodings the browser can accept from the web server;
Accept-Language: the natural languages the browser accepts;
Connection: whether the HTTP connection should be kept open for reuse; usually Keep-Alive;
Host: the server's domain name or IP address, plus the port number if it is not the default port;
Referer: the URL of the page from which the current request originated;
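To make the list concrete, the header fields described above can be assembled into a plain dictionary and sent with any request. This is a minimal sketch; the Host and Referer values here are placeholders, not a real target site:

```python
# Assemble the request headers described above into a dict.
# Host and Referer use example.com as a placeholder target.
headers = {
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Encoding": "gzip, deflate",
    "Accept-Language": "zh-CN,zh;q=0.9",
    "Connection": "keep-alive",
    "Host": "www.example.com",
    "Referer": "https://www.example.com/",
}

# Each entry is a plain string; requests sends them verbatim on the wire.
for name, value in headers.items():
    print(f"{name}: {value}")
```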
user_agent_list = [
    "Opera/9.80 (X11; Linux i686; U; hu) Presto/2.9.168 Version/11.50",
    "Opera/9.80 (X11; Linux i686; U; ru) Presto/2.8.131 Version/11.11",
    "Opera/9.80 (X11; Linux i686; U; es-ES) Presto/2.8.131 Version/11.11",
    "Mozilla/5.0 (Windows NT 5.1; U; en; rv:1.8.1) Gecko/20061208 Firefox/5.0 Opera 11.11",
    "Opera/9.80 (X11; Linux x86_64; U; bg) Presto/2.8.131 Version/11.10",
    "Opera/9.80 (Windows NT 6.0; U; en) Presto/2.8.99 Version/11.10",
    "Opera/9.80 (Windows NT 5.1; U; zh-tw) Presto/2.8.131 Version/11.10",
    "Opera/9.80 (Windows NT 6.1; Opera Tablet/15165; U; en) Presto/2.8.149 Version/11.1",
    "Opera/9.80 (X11; Linux x86_64; U; Ubuntu/10.10 (maverick); pl) Presto/2.7.62 Version/11.01",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.97 Safari/537.36",
    "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:70.0) Gecko/20100101 Firefox/70.0",
    "Opera/9.80 (X11; Linux i686; Ubuntu/14.10) Presto/2.12.388 Version/12.16",
    "Opera/9.80 (Windows NT 6.0) Presto/2.12.388 Version/12.14",
    "Mozilla/5.0 (Windows NT 6.0; rv:2.0) Gecko/20100101 Firefox/4.0 Opera 12.14",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.0) Opera 12.14",
    "Opera/12.80 (Windows NT 5.1; U; en) Presto/2.10.289 Version/12.02",
    "Opera/9.80 (Windows NT 6.1; U; es-ES) Presto/2.9.181 Version/12.00",
    "Opera/9.80 (Windows NT 5.1; U; zh-sg) Presto/2.9.181 Version/12.00",
    "Opera/12.0(Windows NT 5.2;U;en)Presto/22.9.168 Version/12.00",
    "Opera/12.0(Windows NT 5.1;U;en)Presto/22.9.168 Version/12.00",
    "Mozilla/5.0 (Windows NT 5.1) Gecko/20100101 Firefox/14.0 Opera/12.0",
    "Opera/9.80 (Windows NT 6.1; WOW64; U; pt) Presto/2.10.229 Version/11.62",
    "Opera/9.80 (Windows NT 6.0; U; pl) Presto/2.10.229 Version/11.62",
    "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; fr) Presto/2.9.168 Version/11.52",
    "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; de) Presto/2.9.168 Version/11.52",
    "Opera/9.80 (Windows NT 5.1; U; en) Presto/2.9.168 Version/11.51",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; de) Opera 11.51",
    "Opera/9.80 (X11; Linux x86_64; U; fr) Presto/2.9.168 Version/11.50",
]
referer_list = ["https://www.test.com/", "https://www.baidu.com/"]
獲取隨機(jī)數(shù),即每次采集都會(huì)根據(jù)隨機(jī)數(shù)提取隨機(jī)用戶代理、引用地址(注:若有多個(gè)頁(yè)面循環(huán)采集,最好采集完單個(gè)等待個(gè)幾秒鐘再繼續(xù)采集,減小服務(wù)器的壓力。):
import time
import random
import lxml.html
import requests

def get_randam(data):
    # Return a random valid index into the given list
    return random.randint(0, len(data) - 1)

def crawl():
    headers = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
        'Accept-Encoding': 'gzip, deflate',
        'Accept-Language': 'zh-CN,zh;q=0.9',
        'Connection': 'keep-alive',
        'Host': 'test.com',
    }
    # Pick a random User-Agent and Referer for this request
    random_index = get_randam(user_agent_list)
    headers['User-Agent'] = user_agent_list[random_index]
    random_index_01 = get_randam(referer_list)
    headers['Referer'] = referer_list[random_index_01]
    session = requests.session()
    url = "https://www.test.com/"
    html_data = session.get(url, headers=headers, timeout=180)
    html_data.raise_for_status()
    html_data.encoding = 'utf-8-sig'
    data = html_data.text
    data_doc = lxml.html.document_fromstring(data)
    # ... parse, extract, and store the page data here
    time.sleep(random.randint(3, 5))  # pause between pages to ease the load on the server
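As a side note, the index-based selection above can be written more compactly with random.choice, which picks an element from a sequence directly instead of going through random.randint. A small sketch with a trimmed-down pool:

```python
import random

# A trimmed-down pool, just to show the selection mechanics
user_agents = [
    "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:70.0) Gecko/20100101 Firefox/70.0",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.97 Safari/537.36",
]
referers = ["https://www.test.com/", "https://www.baidu.com/"]

# random.choice replaces the manual random.randint(0, len(data) - 1) indexing
ua = random.choice(user_agents)
referer = random.choice(referers)
print(ua)
print(referer)
```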
根據(jù)代理ip的匿名程度,代理ip可以分為下面四類:
透明代理(Transparent Proxy)Transparent Proxy):透明代理雖然可以直接“隱藏”你的IP地址,但是還是可以查到你是誰(shuí)。
匿名代理(Anonymous Proxy):匿名代理比透明代理進(jìn)步了一點(diǎn):別人只能知道你用了代理,無(wú)法知道你是誰(shuí)。
混淆代理(Distorting Proxies):與匿名代理相同,如果使用了混淆代理,別人還是能知道你在用代理,但是會(huì)得到一個(gè)假的IP地址,偽裝的更逼真
高匿代理(Elite proxy或High Anonymity Proxy):可以看出來(lái),高匿代理讓別人根本無(wú)法發(fā)現(xiàn)你是在用代理,所以是最好的選擇。
在使用的使用,毫無(wú)疑問(wèn)使用高匿代理效果最好
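To see why the four categories differ, consider what the target server observes. The sketch below is hypothetical classification logic based on two headers that proxies commonly forward, X-Forwarded-For and Via; real servers may use other signals, and the IP addresses are made up:

```python
def classify_proxy(remote_addr, headers, real_ip):
    """Classify a proxy by what the target server can observe.

    remote_addr: the IP the connection arrives from (the proxy's IP)
    headers:     forwarded request headers, e.g. X-Forwarded-For / Via
    real_ip:     the client's true IP, known here only for illustration
    """
    xff = headers.get("X-Forwarded-For")
    via = headers.get("Via")
    if xff == real_ip:
        return "transparent"   # real IP leaked in X-Forwarded-For
    if xff:
        return "distorting"    # proxy use visible, but the forwarded IP is fake
    if via:
        return "anonymous"     # proxy use visible, real IP hidden
    return "elite"             # indistinguishable from a direct visit

# The crawler's real IP is 1.2.3.4; the proxy's IP is 5.6.7.8.
print(classify_proxy("5.6.7.8", {"X-Forwarded-For": "1.2.3.4"}, "1.2.3.4"))  # transparent
print(classify_proxy("5.6.7.8", {"X-Forwarded-For": "9.9.9.9"}, "1.2.3.4"))  # distorting
print(classify_proxy("5.6.7.8", {"Via": "1.1 proxy"}, "1.2.3.4"))            # anonymous
print(classify_proxy("5.6.7.8", {}, "1.2.3.4"))                              # elite
```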
Below I use a free high-anonymity proxy IP for collection:
# Free proxy list: https://www.xicidaili.com/nn
import requests

proxies = {
    "http": "http://117.30.113.248:9999",
    "https": "https://120.83.120.157:9999",
}
r = requests.get("https://www.baidu.com", proxies=proxies)
r.raise_for_status()
r.encoding = 'utf-8-sig'
print(r.text)
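Free proxies die quickly, so in practice you would keep a small pool and rotate through it, just like the User-Agent pool earlier. A hypothetical helper that turns an "ip:port" string into the proxies mapping requests expects (the addresses below are made up):

```python
import random

def make_proxies(ip_port):
    """Build the proxies mapping that requests expects from an "ip:port" string."""
    return {
        "http": f"http://{ip_port}",
        "https": f"https://{ip_port}",
    }

# A hypothetical pool of proxy addresses; pick one at random per request.
proxy_pool = ["117.30.113.248:9999", "120.83.120.157:9999"]
proxies = make_proxies(random.choice(proxy_pool))
print(proxies["http"])
```

On failure (timeout, connection error) you would drop that address from the pool and retry with another, so a single dead proxy does not stall the whole crawl.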
Thank you for reading. That concludes this look at how to collect data with a Python crawler. Hopefully it has given you a deeper understanding of the approach; how well it works in any given case still needs to be verified in practice.