This article walks through how to search the document tree with Beautiful Soup, a common task when writing Python crawlers. We hope you find it a useful reference.
Searching the document tree
1. find_all(name, attrs, recursive, text, **kwargs)
1) The name argument
The name argument finds every Tag whose name matches the filter; string objects (text nodes) are automatically ignored.
a. Passing a string
The simplest filter is a string. Pass a string to a search method and Beautiful Soup finds every tag whose name matches that string exactly, returning the results as a list.
```python
#!/usr/bin/python3
# -*- coding:utf-8 -*-
from bs4 import BeautifulSoup

# Sample document (the classic "three sisters" HTML)
html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""

# Create the Beautiful Soup object, specifying the lxml parser
soup = BeautifulSoup(html, "lxml")
print(soup.find_all("b"))
print(soup.find_all("a"))
```
Output:

```
[<b>The Dormouse's story</b>]
[<a class="sister" href="http://example.com/elsie" id="link1"><!-- Elsie --></a>, <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>, <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
```
b. Passing a regular expression
If you pass in a compiled regular expression, Beautiful Soup filters tag names against it using the pattern's search() method.
```python
#!/usr/bin/python3
# -*- coding:utf-8 -*-
from bs4 import BeautifulSoup
import re

html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""

# Create the Beautiful Soup object, specifying the lxml parser
soup = BeautifulSoup(html, "lxml")
# Print the name of every tag whose name starts with "b"
for tag in soup.find_all(re.compile("^b")):
    print(tag.name)
```
Output:

```
body
b
```
c. Passing a list
If you pass in a list, Beautiful Soup returns everything that matches any element of the list, again as a list.
```python
#!/usr/bin/python3
# -*- coding:utf-8 -*-
from bs4 import BeautifulSoup

html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""

# Create the Beautiful Soup object, specifying the lxml parser
soup = BeautifulSoup(html, "lxml")
# Find every <a> tag and every <b> tag
print(soup.find_all(['a', 'b']))
```
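Besides strings, regular expressions, and lists, Beautiful Soup's name filter also accepts the value True (which matches every tag) and a function that is called on each tag and returns a boolean. A minimal sketch, using a made-up snippet of HTML and the stdlib html.parser so no extra parser is required:

```python
from bs4 import BeautifulSoup

# Hypothetical document for illustration
html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters.</p>
</body></html>
"""
soup = BeautifulSoup(html, "html.parser")

# True matches every tag (text strings are still ignored)
print([tag.name for tag in soup.find_all(True)])

# A function filter keeps each tag for which it returns True;
# here: tags that carry a class attribute
print([tag.name for tag in soup.find_all(lambda tag: tag.has_attr("class"))])
```

The function filter is handy when no single string or regex captures the condition, e.g. "tags with one attribute but not another".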
2) keyword arguments
A keyword argument that doesn't correspond to one of the built-in parameter names is treated as a filter on the tag attribute of the same name: id="link1" searches every tag's id attribute.
```python
#!/usr/bin/python3
# -*- coding:utf-8 -*-
from bs4 import BeautifulSoup

html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""

# Create the Beautiful Soup object, specifying the lxml parser
soup = BeautifulSoup(html, "lxml")
print(soup.find_all(id="link1"))
```
Output:

```
[<a class="sister" href="http://example.com/elsie" id="link1"><!-- Elsie --></a>]
```
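Two wrinkles worth knowing about attribute filters, sketched below with hypothetical markup and the stdlib html.parser: class is a reserved word in Python, so Beautiful Soup accepts class_ instead, and the attrs dict covers attribute names that aren't valid Python identifiers, such as data-* attributes.

```python
from bs4 import BeautifulSoup

# Hypothetical markup for illustration
html = '<a class="sister" id="link2" data-name="lacie">Lacie</a>'
soup = BeautifulSoup(html, "html.parser")

# "class" is a reserved word, so the keyword is spelled class_
print(soup.find_all(class_="sister"))

# data-* names can't be keywords; pass them via the attrs dict
print(soup.find_all(attrs={"data-name": "lacie"}))
```

Both calls match the same `<a>` tag here; attrs also works for any attribute, so attrs={"id": "link1"} is equivalent to id="link1".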
3) The text argument
The text argument searches the document's string content. Like name, it accepts a string, a regular expression, or a list. (Recent versions of Beautiful Soup also expose this argument under the name string.)
```python
#!/usr/bin/python3
# -*- coding:utf-8 -*-
from bs4 import BeautifulSoup
import re

html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""

# Create the Beautiful Soup object, specifying the lxml parser
soup = BeautifulSoup(html, "lxml")
# String
print(soup.find_all(text=" Elsie "))
# List
print(soup.find_all(text=["Tillie", " Elsie ", "Lacie"]))
# Regular expression
print(soup.find_all(text=re.compile("Dormouse")))
```
Output:

```
[' Elsie ']
[' Elsie ', 'Lacie', 'Tillie']
["The Dormouse's story", "The Dormouse's story"]
```

Note that ' Elsie ' is found even though it sits inside an HTML comment: Comment is a subclass of NavigableString, so text searches match it as well.
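The signature at the top also lists recursive, and find_all additionally accepts limit; neither is shown above, so here is a minimal sketch with invented markup, using the stdlib html.parser:

```python
from bs4 import BeautifulSoup

# Hypothetical document: three links nested inside a <p>
html = """
<html><body>
<p class="story">
<a id="link1">Elsie</a>
<a id="link2">Lacie</a>
<a id="link3">Tillie</a>
</p>
</body></html>
"""
soup = BeautifulSoup(html, "html.parser")

# limit stops the search after the given number of matches
print(len(soup.find_all("a", limit=2)))          # 2

# recursive=False searches direct children only; the <a> tags are
# grandchildren of <body>, so nothing matches here
print(soup.body.find_all("a", recursive=False))  # []

# find() is shorthand for "first match only" (or None if nothing matches)
print(soup.find("a"))
```

limit is useful on large pages when you only need the first few results; find() saves indexing into a one-element list.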
That covers searching the document tree in a Python crawler. We hope it helps you get something out of the topic; if you found the article useful, feel free to share it so more people can see it.