How do you use BeautifulSoup in Python? Many newcomers are unclear on this, so to help you work through it, this article walks through the process in detail. If this is something you need, read on; hopefully you will get something out of it.
Step 1: Request the URL and fetch the page source
# -*- coding: utf-8 -*-
import urllib
import urllib2
import re
import os

if __name__ == '__main__':
    # Request the URL and fetch the page source
    url = 'http://www.qiushibaike.com/textnew/page/1/?s=4941357'
    user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36'
    headers = {'User-Agent': user_agent}
    try:
        request = urllib2.Request(url=url, headers=headers)
        response = urllib2.urlopen(request)
        content = response.read()
    except urllib2.HTTPError as e:
        print e
        exit()
    except urllib2.URLError as e:
        print e
        exit()
    print content.decode('utf-8')
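The script above is Python 2 (urllib2 and print statements). If you are on Python 3, urllib2 was split into urllib.request and urllib.error; a minimal sketch of the same fetch step under that assumption looks like this:

# Python 3 sketch of the same fetch step (urllib2 became urllib.request / urllib.error)
from urllib import request, error

url = 'http://www.qiushibaike.com/textnew/page/1/?s=4941357'
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}
try:
    req = request.Request(url, headers=headers)
    with request.urlopen(req) as response:
        content = response.read().decode('utf-8')
except error.HTTPError as e:  # raised for 4xx/5xx status codes
    print(e)
except error.URLError as e:   # raised for network-level failures (DNS, refused connection)
    print(e)
else:
    print(content)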
Step 2: Extract the content with a regular expression
First, look at the page source to find where the content you want sits and what markup identifies it.
Then write a regular expression to match and extract it.
Note that . in a regular expression does not match \n by default, so you need to set the re.S flag so it can match across lines.
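To see what re.S changes in isolation, here is a tiny standalone demonstration (made-up strings, not the site's HTML):

import re

text = 'before\ninside\nafter'
print(re.findall('before(.*?)after', text))        # [] because . stops at \n
print(re.findall('before(.*?)after', text, re.S))  # ['\ninside\n'] once . can match \n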
# -*- coding: utf-8 -*-
import urllib
import urllib2
import re
import os

if __name__ == '__main__':
    # Request the URL and fetch the page source
    url = 'http://www.qiushibaike.com/textnew/page/1/?s=4941357'
    user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36'
    headers = {'User-Agent': user_agent}
    try:
        request = urllib2.Request(url=url, headers=headers)
        response = urllib2.urlopen(request)
        content = response.read()
    except urllib2.HTTPError as e:
        print e
        exit()
    except urllib2.URLError as e:
        print e
        exit()

    # Extract the data; the pattern follows the div.content/span structure
    # also used in the BeautifulSoup version below.
    # Mind the newlines: re.S lets . match \n as well
    regex = re.compile('<div class="content">.*?<span>(.*?)</span>.*?</div>', re.S)
    items = re.findall(regex, content)
    for item in items:
        print item
Step 3: Clean up the data and save it to files
# -*- coding: utf-8 -*-
import urllib
import urllib2
import re
import os

if __name__ == '__main__':
    # Request the URL and fetch the page source
    url = 'http://www.qiushibaike.com/textnew/page/1/?s=4941357'
    user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36'
    headers = {'User-Agent': user_agent}
    try:
        request = urllib2.Request(url=url, headers=headers)
        response = urllib2.urlopen(request)
        content = response.read()
    except urllib2.HTTPError as e:
        print e
        exit()
    except urllib2.URLError as e:
        print e
        exit()

    # Extract the data (re.S lets . match newlines)
    regex = re.compile('<div class="content">.*?<span>(.*?)</span>.*?</div>', re.S)
    items = re.findall(regex, content)

    path = './qiubai'
    if not os.path.exists(path):
        os.makedirs(path)
    count = 1
    for item in items:
        # Tidy the data: drop raw \n and turn <br/> into \n
        item = item.replace('\n', '').replace('<br/>', '\n')
        filepath = path + '/' + str(count) + '.txt'
        f = open(filepath, 'w')
        f.write(item)
        f.close()
        count += 1
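One note on the file handling: open/write/close works, but if write() raises, the file is never closed. A with block, which runs in both Python 2 and 3, closes it even on errors; a minimal sketch:

import os

path = './qiubai'
if not os.path.exists(path):
    os.makedirs(path)

# 'with' guarantees f.close() runs even if f.write() raises
with open(path + '/1.txt', 'w') as f:
    f.write('cleaned joke text here')  # placeholder content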
Step 4: Scrape the content from multiple pages
# -*- coding: utf-8 -*-
import urllib
import urllib2
import re
import os

if __name__ == '__main__':
    path = './qiubai'
    if not os.path.exists(path):
        os.makedirs(path)
    user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36'
    headers = {'User-Agent': user_agent}
    regex = re.compile('<div class="content">.*?<span>(.*?)</span>.*?</div>', re.S)
    count = 1
    for cnt in range(1, 35):
        print 'Page ' + str(cnt)
        # Request each page and fetch its source
        url = 'http://www.qiushibaike.com/textnew/page/' + str(cnt) + '/?s=4941357'
        try:
            request = urllib2.Request(url=url, headers=headers)
            response = urllib2.urlopen(request)
            content = response.read()
        except urllib2.HTTPError as e:
            print e
            exit()
        except urllib2.URLError as e:
            print e
            exit()
        # print content

        # Extract the data (re.S lets . match newlines)
        items = re.findall(regex, content)

        # Save each item to its own file
        for item in items:
            # print item
            # Tidy the data: drop raw \n and turn <br/> into \n
            item = item.replace('\n', '').replace('<br/>', '\n')
            filepath = path + '/' + str(count) + '.txt'
            f = open(filepath, 'w')
            f.write(item)
            f.close()
            count += 1
    print 'Done'
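Since every step repeats the same fetch-and-handle-errors block, one way to tidy the page loop is to pull it into a helper function. This is only a sketch (the name fetch_page is my own, and it stays on Python 2 to match the article):

import time
import urllib2

def fetch_page(url, headers):
    # Return the raw page source, or None if the request failed
    try:
        request = urllib2.Request(url=url, headers=headers)
        return urllib2.urlopen(request).read()
    except (urllib2.HTTPError, urllib2.URLError) as e:
        print e
        return None

# Inside the loop over pages:
#     content = fetch_page(url, headers)
#     if content is None:
#         continue
#     time.sleep(1)  # optional pause so 34 rapid requests don't hammer the server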
Parsing the source with BeautifulSoup
# -*- coding: utf-8 -*-
import urllib
import urllib2
import re
import os
from bs4 import BeautifulSoup

if __name__ == '__main__':
    url = 'http://www.qiushibaike.com/textnew/page/1/?s=4941357'
    user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36'
    headers = {'User-Agent': user_agent}
    request = urllib2.Request(url=url, headers=headers)
    response = urllib2.urlopen(request)
    # print response.read()

    # Parse the response with the lxml parser
    soup_packetpage = BeautifulSoup(response, 'lxml')
    # Find every <div class="content"> and read the text of its <span>
    items = soup_packetpage.find_all("div", class_="content")
    for item in items:
        try:
            content = item.span.string
        except AttributeError as e:
            print e
            exit()
        if content:
            print content + "\n"
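find_all returns a list of matching Tag objects, and item.span picks the first <span> inside each one. BeautifulSoup also accepts CSS selectors through select, so the same lookup can be written more compactly; a sketch on a stand-in snippet with the same div.content/span structure:

from bs4 import BeautifulSoup

html = '<div class="content"><span>joke text</span></div>'  # stand-in snippet
soup = BeautifulSoup(html, 'lxml')

# CSS-selector equivalent of find_all("div", class_="content") plus item.span
for span in soup.select('div.content span'):
    print(span.string)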
Here is code that uses BeautifulSoup to scrape book titles and their prices.
Comparing the two scripts shows how bs4 reads a tag versus how it reads a tag's content.
(I have not studied this part properly yet myself, so for now I am writing it by imitation.)
# -*- coding: utf-8 -*-
import urllib2
import urllib
import re
from bs4 import BeautifulSoup

url = "https://www.packtpub.com/all"
try:
    html = urllib2.urlopen(url)
except urllib2.HTTPError as e:
    print e
    exit()

soup_packtpage = BeautifulSoup(html, 'lxml')
# Every book title sits in a <div class="book-block-title">
all_book_title = soup_packtpage.find_all("div", class_="book-block-title")
# Prices look like " $ 10.99"
price_regexp = re.compile(u"\s+\$\s\d+\.\d+")
for book_title in all_book_title:
    try:
        print "Book's name is " + book_title.string.strip()
    except AttributeError as e:
        print e
        exit()
    # The price is the next piece of text after the title that matches the pattern
    book_price = book_title.find_next(text=price_regexp)
    try:
        print "Book's price is " + book_price.strip()
    except AttributeError as e:
        print e
        exit()
    print ""
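To make that comparison concrete, here is a self-contained snippet (on a made-up fragment, not Packt's real markup) showing the difference between reading a tag and reading its content, plus the find_next lookup used for the price:

from bs4 import BeautifulSoup

html = '<div class="book-block-title">Python Basics</div><span> $ 10.99</span>'
soup = BeautifulSoup(html, 'lxml')

title = soup.find('div', class_='book-block-title')
print(title)         # the whole tag: <div class="book-block-title">Python Basics</div>
print(title.string)  # just its text: Python Basics
print(title.find_next('span').string.strip())  # the next matching element's text: $ 10.99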
Did the content above help you? If you would like to learn more about related topics or read more articles like this, follow the 創新互聯 industry news channel. Thank you for your support.