This article walks through how to scrape Douban movie data with a Python crawler. In practice, plenty of people run into trouble at a few specific points, so let's work through how to handle those situations. Read carefully and you should come away with something useful!
import os

# Build a proxy pool; num is the pool capacity
def proxypool(num):
    n = 1
    os.chdir(r'/Users/apple888/PycharmProjects/proxy IP')
    proxys = list()
    with open('host.txt', 'r') as fp:
        ips = fp.readlines()
    while n < num:
        for p in ips:
            ip = p.strip('\n').split('\t')
            proxy = 'http://' + ip[0] + ':' + ip[1]
            # requests expects the URL scheme as the key, e.g. {'http': ...},
            # not an arbitrary name like 'proxy'
            proxies = {'http': proxy}
            proxys.append(proxies)
            n += 1
    return proxys
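The per-line parsing above can be sketched in isolation. This is a minimal illustration, assuming each `host.txt` line looks like `host<TAB>port` (the sample address below is made up); the key point is that requests looks proxies up by URL scheme, so the mapping keys must be `'http'`/`'https'`:

```python
# Hypothetical helper mirroring the parsing inside proxypool()
def make_proxy_entry(line):
    # split "host<TAB>port" into its two fields
    host, port = line.strip('\n').split('\t')[:2]
    proxy = 'http://{}:{}'.format(host, port)
    # keys must be URL schemes for requests to use the proxy
    return {'http': proxy, 'https': proxy}

entry = make_proxy_entry('1.2.3.4\t8080')
```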
import csv
import random
import requests
from bs4 import BeautifulSoup

# Code to scrape Douban movies
def fetch_movies(tag, pages, proxys):
    os.chdir(r'/Users/apple888/PycharmProjects/proxy IP/豆瓣電影')
    # note: the URL tag is hard-coded to 愛情; the tag argument only names the CSV file
    url = 'https://movie.douban.com/tag/愛情?start={}'
    headers = {'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/'
                             '537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Mobile Safari/537.36'}
    # Save the data to a CSV file
    csvFile = open("{}.csv".format(tag), 'a+', newline='', encoding='utf-8')
    writer = csv.writer(csvFile)
    writer.writerow(('name', 'score', 'peoples', 'date', 'nation', 'actor'))
    for page in range(0, pages * 20, 20):
        # format a copy instead of overwriting the template, otherwise every
        # page after the first would reuse the already-formatted URL
        page_url = url.format(page)
        try:
            respones = requests.get(page_url, headers=headers, proxies=random.choice(proxys))
            while respones.status_code != 200:
                respones = requests.get(page_url, headers=headers, proxies=random.choice(proxys))
            soup = BeautifulSoup(respones.text, 'lxml')
            movies = soup.find_all(name='div', attrs={'class': 'pl2'})
            for movie in movies:
                movie = BeautifulSoup(str(movie), 'lxml')
                movname = movie.find(name='a')
                # Movie title
                movname = movname.contents[0].replace(' ', '').strip('\n').strip('/').strip('\n')
                movInfo = movie.find(name='p').contents[0].split('/')
                # Release date
                date = movInfo[0][0:10]
                # Country
                nation = movInfo[0][11:-2]
                actor_list = [act.strip(' ').replace('...', '')
                              for act in movInfo[1:-1]]
                # Actors, joined with tabs
                actors = '\t'.join(actor_list)
                # Rating
                score = movie.find('span', {'class': 'rating_nums'}).string
                # Number of ratings
                peopleNum = movie.find('span', {'class': 'pl'}).string[1:-4]
                writer.writerow((movname, score, peopleNum, date, nation, actors))
        except Exception:
            continue
        print('{} pages in total, {} scraped'.format(pages, int(page / 20)))
    csvFile.close()
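One caveat with the code above: the `while respones.status_code != 200:` retry loop runs forever if every proxy in the pool is dead. A bounded variant might look like the sketch below, where `max_tries` and the `timeout` value are illustrative choices, not parameters from the original code:

```python
import random
import requests

def get_with_retries(url, headers, proxys, max_tries=5):
    """Try up to max_tries random proxies; return the first 200 response or None."""
    for _ in range(max_tries):
        try:
            resp = requests.get(url, headers=headers,
                                proxies=random.choice(proxys), timeout=10)
            if resp.status_code == 200:
                return resp
        except requests.RequestException:
            continue  # bad proxy or bad request: pick another proxy
    return None  # give up instead of looping forever
```

The caller then checks for `None` and skips the page, rather than hanging on an exhausted pool.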
Code that runs the functions above:
import time

start = time.time()
proxyPool = proxypool(50)
fetch_movies('爛片', 111, proxyPool)
end = time.time()
lastT = int(end - start)
print('Elapsed: {}s'.format(lastT))
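Since the actor names are joined with tabs before being written, reading the CSV back needs one extra split step. A small sketch using the standard csv module (the sample row below is made up; the column order matches the header written by fetch_movies):

```python
import csv
import io

# Made-up sample shaped like the output of fetch_movies
sample = ('name,score,peoples,date,nation,actor\n'
          'SomeMovie,8.4,123456,2017-02-10,USA,Actor A\tActor B\n')

rows = list(csv.DictReader(io.StringIO(sample)))
first = rows[0]
# actors were '\t'.join()-ed on write, so split them back out
actors = first['actor'].split('\t')
```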
That wraps up "how to scrape Douban movie data with a Python crawler". Thanks for reading, and I hope you found it useful!