This post covers the basics of "how to use a Python crawler". Plenty of readers run into trouble when working through a real case, so let's walk through how to handle those situations step by step. Read carefully and you should come away able to do it yourself!
1. Import the modules
import re
import time
import json

import requests
from bs4 import BeautifulSoup
import pandas as pd
import numpy as np
2. Status code
r = requests.get('https://github.com/explore')
r.status_code
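A status code of 200 means the request succeeded. If you would rather fail fast on 4xx/5xx responses instead of checking the code by hand, requests provides raise_for_status(); a minimal sketch:

r = requests.get('https://github.com/explore', timeout=5)
print(r.status_code)   # 200 on success
r.raise_for_status()   # raises requests.exceptions.HTTPError for 4xx/5xx responses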
3. Crawl Zhihu
# Browser headers and cookies (copied from a logged-in browser session)
headers = {'User-Agent':'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.87 Safari/537.36'}
cookies = {'cookie':'_zap=3d979dbb-f25b-4014-8770-89045dec48f6; d_c0="APDvML4koQ-PTqFU56egNZNd2wd-eileT3E=|1561292196"; tst=r; _ga=GA1.2.910277933.1582789012; q_c1=9a429b07b08a4ae1afe0a99386626304|1584073146000|1561373910000; _xsrf=bf1c5edf-75bd-4512-8319-02c650b7ad2c; _gid=GA1.2.1983259099.1586575835; l_n_c=1; l_cap_id="NDIxM2M4OWY4N2YwNDRjM2E3ODAxMDdmYmY2NGFiMTQ=|1586663749|ceda775ba80ff485b63943e0baf9968684237435"; r_cap_id="OWY3OGQ1MDJhMjFjNDBiYzk0MDMxMmVlZDIwNzU0NzU=|1586663749|0948d23c731a8fa985614d3ed58edb6405303e99"; cap_id="M2I5NmJkMzRjMjc3NGZjNDhiNzBmNDMyNDQ3NDlmNmE=|1586663749|dacf440ab7ad64214a939974e539f9b86ddb9eac"; n_c=1; Hm_lvt_98beee57fd2ef70ccdd5ca52b9740c49=1586585625,1586587735,1586667228,1586667292; Hm_lpvt_98beee57fd2ef70ccdd5ca52b9740c49=1586667292; SESSIONID=GWBltmMTwz5oFeBTjRm4Akv8pFF6p8Y6qWkgUP4tjp6; JOID=UVkSBEJI6EKgHAipMkwAEWAkvEomDbkAwmJn4mY1kHHPVGfpYMxO3voUDK88UO62JqgwW5Up4hC2kX_KGO9xoKI=; osd=UlEXAU5L4EelEAuhN0kMEmghuUYlBbwFzmFv52M5k3nKUWvqaMlL0vkcCaowU-azI6QzU5As7hO-lHrGG-d0pa4=; capsion_ticket="2|1:0|10:1586667673|14:capsion_ticket|44:YTJkYmIyN2Q4YWI4NDI0Mzk0NjQ1YmIwYmUxZGYyNzY=|b49eb8176314b73e0ade9f19dae4b463fb970c8cbd1e6a07a6a0e535c0ab8ac3"; z_c0="2|1:0|10:1586667694|4:z_c0|92:Mi4xOGc1X0dnQUFBQUFBOE84d3ZpU2hEeVlBQUFCZ0FsVk5ydTVfWHdDazlHMVM1eFU5QjlqamJxWVhvZ2xuWlhTaVJ3|bcd3601ae34951fe72fd3ffa359bcb4acd60462715edcd1e6c4e99776f9543b3"; unlock_ticket="AMCRYboJGhEmAAAAYAJVTbankl4i-Y7Pzkta0e4momKdPG3NRc6GUQ=="; KLBRSID=fb3eda1aa35a9ed9f88f346a7a3ebe83|1586667697|1586660346'}
# Zhihu recommend-feed API endpoint
start_url = 'https://www.zhihu.com/api/v3/feed/topstory/recommend?session_token=c03069ed8f250472b687fd1ee704dd5b&desktop=true&page_number=5&limit=6&action=pull&ad_interval=-1&before_id=23'
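Note that start_url here points at Zhihu's recommend-feed API rather than an HTML page, so the response body is JSON (presumably why json was imported in step 1). A minimal sketch of fetching and decoding it, assuming the payload carries a top-level 'data' list; that field name is an assumption, so inspect the real response first:

resp = requests.get(start_url, headers=headers, cookies=cookies, timeout=5)
feed = json.loads(resp.text)   # equivalently resp.json()
items = feed.get('data', [])   # 'data' is assumed, not verified against the live API
print(len(items))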
4. Parse with BeautifulSoup
s = requests.Session()
start_url = 'https://www.zhihu.com/'
html = s.get(url=start_url, headers=headers, cookies=cookies, timeout=5)
soup = BeautifulSoup(html.content, 'html.parser')

question = []          # question titles
question_address = []  # question URLs
temp1 = soup.find_all('div', class_='Card TopstoryItem TopstoryItem-isRecommend')
for item in temp1:
    temp2 = item.find_all('div', itemprop='zhihu:question')
    if temp2 != []:  # some cards are columns or articles with no question block; skip them for now
        question_address.append(temp2[0].find('meta', itemprop='url').get('content'))
        question.append(temp2[0].find('meta', itemprop='name').get('content'))
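If the find/get('content') chain above looks opaque, here is a self-contained toy example that runs the same extraction on a hand-written HTML fragment. The fragment only mimics the meta-tag structure and is not real Zhihu markup:

sample = '''
<div itemprop="zhihu:question">
  <meta itemprop="url" content="https://www.zhihu.com/question/12345">
  <meta itemprop="name" content="An example question title">
</div>
'''
demo = BeautifulSoup(sample, 'html.parser')
block = demo.find('div', itemprop='zhihu:question')
print(block.find('meta', itemprop='url').get('content'))   # prints the question URL
print(block.find('meta', itemprop='name').get('content'))  # prints the question title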
5. Collect and store the question details
question_focus_number = []   # follower counts
question_answer_number = []  # answer counts
for url in question_address:
    test = s.get(url=url, headers=headers, cookies=cookies, timeout=5)
    soup = BeautifulSoup(test.content, 'html.parser')
    info = soup.find_all('div', class_='QuestionPage')[0]
    focus_number = info.find('meta', itemprop='zhihu:followerCount').get('content')  # followers
    answer_number = info.find('meta', itemprop='answerCount').get('content')         # answers
    question_focus_number.append(focus_number)
    question_answer_number.append(answer_number)
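This loop sends one request per question back to back. To be gentler on the server (and less likely to get blocked), it is common to pause between requests, which is likely why time was imported in step 1. A minimal sketch of the same loop shape with throttling added:

for url in question_address:
    page = s.get(url, headers=headers, cookies=cookies, timeout=5)
    # ... parse page.content as above ...
    time.sleep(1)  # wait one second between requests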
6. Organize the data and output it
question_info = pd.DataFrame(
    list(zip(question, question_focus_number, question_answer_number)),
    columns=['問題名稱', '關注人數', '回答人數']
)
for item in ['關注人數', '回答人數']:
    question_info[item] = np.array(question_info[item], dtype='int')  # counts arrive as strings; cast to int
question_info.sort_values(by='關注人數', ascending=False)
Output: the resulting DataFrame, one row per question with its follower and answer counts, sorted by the '關注人數' (followers) column in descending order.
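In a notebook the last line displays the sorted table; in a plain script you would assign it and save it. A minimal sketch that writes the result to CSV (the filename is arbitrary):

top = question_info.sort_values(by='關注人數', ascending=False)
top.to_csv('zhihu_questions.csv', index=False, encoding='utf-8-sig')  # utf-8-sig keeps the Chinese headers readable in Excel
print(top.head())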
That wraps up "how to use a Python crawler". Thanks for reading! To learn more industry knowledge, follow the 創新互聯 website, where we will keep publishing practical, high-quality articles.