[Python] XiuRen Photo Album Crawler 2.0

Long time no see, everyone. [/smirk]


It's been quite a while since my last post, so here is the 2.0 version I promised back then. (After all, people in the comments have started asking for it, so I can't put it off any longer...)

Emm... I won't put the actual site URL in the body of the post; it's in the comment at the top of the code.


Without further ado, here is the updated code:

# Target site: https://www.xiurenb.com

# Imports
import time, os, requests
from lxml import etree
from urllib import parse

# Request headers

headers = {
	'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36 Edg/96.0.1054.62'
	}

# Initialize the result lists
img_list = []
url_list = []
page_list = []

# Read the search keyword and URL-encode it
human_unencode = input('Enter the human_name:')
human_encode = parse.quote(human_unencode)

# Search URL built from the encoded keyword
url_human = 'https://www.xiurenb.com/plus/search/index.asp?keyword=' + str(human_encode) + '&searchtype=title'

# Get the number of result pages for this person's albums
res_first = requests.get(url=url_human, headers=headers)
tree_first = etree.HTML(res_first.text)
Num_first = len(tree_first.xpath('/html/body/div[3]/div[1]/div/div/ul/div[3]/div/div[2]/a'))
print(f'Page_total:{Num_first}')

# Get the URL of every album on the chosen results page and store it
i = input('Enter the PageNumber:')
print(f'Getting the page-{i}...')
res_human = requests.get(url=url_human + '&p=' + str(i), headers=headers)
tree_human = etree.HTML(res_human.text)
jihe_human = tree_human.xpath('/html/body/div[3]/div[1]/div/div/ul/div[3]/div/div[1]/div/div[1]/h2/a/@href')
for page in jihe_human:
    page_list.append(page)
time.sleep(2)

# Fetch every image of each album
for Page_Num in page_list:
	url = 'https://www.xiurenb.com' + str(Page_Num)
	Num_res = requests.get(url=url, headers=headers)
	Num_tree = etree.HTML(Num_res.text)
	Num = len(Num_tree.xpath('/html/body/div[3]/div/div/div[4]/div/div/a'))
	url_list.append(url)
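	# The remaining pages of this album follow the pattern <album>_1.html, <album>_2.html, ...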
	for i in range(1, int(Num) - 2):
		url_other = url[:-5] + '_' + str(i) +'.html'
		url_list.append(url_other)
	# Collect every image URL from each page of the album
	for url_img in url_list:
		res = requests.get(url=url_img, headers=headers)
		tree = etree.HTML(res.text)
		img_src = tree.xpath('/html/body/div[3]/div/div/div[5]/p/img/@src')
		for img in img_src:
			img_list.append(img)
		time.sleep(0.5)
	# Create the save directory
	res = requests.get(url=url_list[0], headers=headers)
	res.encoding = 'utf-8'
	tree = etree.HTML(res.text)
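	# Use the album page title as the folder name, skipping the first 11 characters (the site's fixed title prefix)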
	path_name = tree.xpath('/html/body/div[3]/div/div/div[1]/h1//text()')[0][11:]
	print(path_name)
	if not os.path.exists(f'C:/Users/liu/Pictures/{human_unencode}'):
		os.mkdir(f'C:/Users/liu/Pictures/{human_unencode}')
	the_path_name = f'C:/Users/liu/Pictures/{human_unencode}/' + path_name
	if not os.path.exists(the_path_name):
		os.mkdir(the_path_name)
		# Download and save the image data
		num = 0
		for j in img_list:
			img_url = 'https://www.xiurenb.com' + j
			img_data = requests.get(url=img_url, headers=headers).content
			img_name = img_url.split('/')[-1]
			finish_num = str(num) + '/' + str(len(img_list))
			with open(f'C:/Users/liu/Pictures/{human_unencode}/' + path_name + '/' + img_name, 'wb') as f:
				print(f'Downloading the img:{img_name}/{finish_num}')
				f.write(img_data)
			num += 1
			time.sleep(0.5)
		# Reset the lists for the next album
		img_list = []
		url_list = []
	else:
		print('gone>>>')
		# Reset the lists for the next album
		img_list = []
		url_list = []

# Print a completion message
print('Finished!')

The code is fairly long this time, so I won't walk through it line by line. One thing to note: remember to change the save path to your own, since your Windows username will be different from mine; a more portable option is sketched below.
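
If you'd rather not hard-code a C:/Users/... path at all, one option is to build it from the current user's home directory with pathlib. This is just a minimal standalone sketch, assuming a Pictures folder under the home directory; the two example values stand in for what the script reads from input() and from the album page title:

# Sketch: portable save directory built from the user's home folder
from pathlib import Path

human_unencode = 'example_name'    # hypothetical value; the script gets this from input()
path_name = 'example_album'        # hypothetical value; the script parses this from the album title
save_dir = Path.home() / 'Pictures' / human_unencode / path_name
save_dir.mkdir(parents=True, exist_ok=True)   # also covers the os.path.exists/os.mkdir checks
print(save_dir)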

This version searches for albums by a person's name, for example: 唐安琪. When you run the code, first enter the name you want to search for, then enter the page number you want to download when prompted; the snippet below shows what the resulting search URL looks like.
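
For reference, here is a tiny standalone snippet that mirrors how the script turns the name you type into the search URL (the same urllib.parse.quote call and url_human string as above; 唐安琪 is just the example name from this post):

# Sketch: how the typed name becomes the search URL
from urllib import parse

keyword = '唐安琪'
url_human = 'https://www.xiurenb.com/plus/search/index.asp?keyword=' + parse.quote(keyword) + '&searchtype=title'
print(url_human)   # the name appears percent-encoded: %E5%94%90%E5%AE%89%E7%90%AA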

If you have any other questions, feel free to ask in the comments.

Of course, if I can't solve it I'll go do some catch-up studying [/sobbing]; after all, I haven't been learning Python for very long...

