This article covers how to scrape comics with Python plus JS decryption. Quite a few people run into trouble with this in practice, so let's walk through how to handle those situations. Read carefully and you should come away able to do it yourself!
By inspecting the page elements, we can see that all the chapter links are wrapped in a single `ol` tag. So to collect every chapter link, we only need to grab all the `a` links under that one `ol` on the page.
The code is as follows:
```python
# Get the comic's chapter addresses
def get_chapter_info(self):
    chapter_info = {}
    url = 'http://ac.qq.com/Comic/ComicInfo/id/{}'.format(self.comic_id)
    html_text = self.get_html(url)
    html = self.parser(html_text)
    # All the chapter links live under the first <ol> on the page
    ol = html.find('ol')[0]
    chapters = ol.find('a')
    for index, chapter in enumerate(chapters):
        title = chapter.attrs['title']
        link = parse.urljoin(TxComic.COMIC_HOST, chapter.attrs['href'])
        key = '第{}章'.format(index)
        chapter_info[key] = {'title': title, 'link': link}
    return chapter_info
```
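The methods in this article hang off a `TxComic` class whose scaffolding the snippets assume but never show (`self.get_html`, `self.parser`, `TxComic.COMIC_HOST`, `self.comic_id`). Below is a minimal sketch of that scaffolding; the class and method names follow the snippets, while the User-Agent header and the choice of `requests_html` as the parser are assumptions on my part:

```python
import requests

class TxComic:
    COMIC_HOST = 'http://ac.qq.com'

    def __init__(self, comic_id):
        self.comic_id = comic_id
        # Assumed: a desktop User-Agent so the site serves the normal page
        self.headers = {'User-Agent': 'Mozilla/5.0'}

    def get_html(self, url):
        # Fetch a page and return its text
        return requests.get(url, headers=self.headers).text

    def parser(self, html_text):
        # Assumed parser: requests_html, whose find() returns a list of
        # elements, matching the html.find('ol')[0] usage above
        from requests_html import HTML
        return HTML(html=html_text)
```

With scaffolding like this in place, `TxComic(comic_id).get_chapter_info()` returns the chapter dictionary built above.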
Once we have all of the comic's chapters, we can request each chapter's link to get that page's details. The code is as follows:
```python
# Request each chapter's URL
def get_comic_info(self):
    chapters = self.get_chapter_info()
    for k, v in chapters.items():
        url = v['link']
        pics = self.get_chapter_pics(url)
        self.async_data(pics)

# Parse the decoded data and download that chapter's images
def async_data(self, res_data):
    book_name = res_data['comic']['title']
    if not os.path.exists(book_name):
        os.mkdir(book_name)
    chapter_tname = "第" + str(res_data['chapter']['cid']) + '章__' + res_data['chapter']['cTitle']
    # Replace '/' so the chapter title is a valid directory name
    chapter_name = eval(repr(chapter_tname).replace('/', '@'))
    path = os.path.join(book_name, chapter_name)
    if not os.path.exists(path):
        os.mkdir(path)
    for index, v in enumerate(res_data['picture']):
        name = os.path.join(path, "{}.png".format(index))
        self.download_img(name, v['url'])
    print(chapter_name + " done")
```
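The `download_img` helper called above isn't shown in the snippet; here is a minimal sketch of what it presumably does (a streamed write; the signature matches the call above, but the chunk size is an assumption). The `eval(repr(...))` trick, meanwhile, is just a roundabout `str.replace`:

```python
import requests

def download_img(name, url):
    # Stream the image to disk so large files aren't held in memory
    resp = requests.get(url, stream=True)
    with open(name, 'wb') as f:
        for chunk in resp.iter_content(chunk_size=8192):
            f.write(chunk)

# eval(repr(s).replace('/', '@')) only swaps '/' for '@' so the chapter
# title is safe to use as a directory name -- it is equivalent to:
chapter_tname = '第12章__A/B'
assert eval(repr(chapter_tname).replace('/', '@')) == chapter_tname.replace('/', '@')
```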
At this point we can successfully request every URL. All that remains is to run the JS decryption on the returned page, pull the data out of `_v`, and download it. The code is as follows:
```python
# Decode the in-page JS to get the chapter data
def get_chapter_pics(self, url):
    # Retry until the page contains both DATA and the nonce script
    while True:
        try:
            response = requests.get(url).text
            # Grab W['DA' + 'TA']
            data = re.findall(r"(?<=var DATA = ').*?(?=')", response)[0]
            nonce = re.findall(r'window\[".+?(?<=;)', response)[0]
            nonce = '='.join(nonce.split('=')[1:])[:-1]
            # Evaluate W['n' + 'onc' + 'e']
            nonce = execjs.eval(nonce)
            break
        except Exception:
            pass
    # Emulate the page's JS: the nonce says which slices to strip from DATA
    T = list(data)
    N = re.findall(r'\d+[a-zA-Z]+', nonce)
    jlen = len(N)
    while jlen:
        jlen -= 1
        jlocate = int(re.findall(r'\d+', N[jlen])[0]) & 255
        jstr = re.sub(r'\d+', '', N[jlen])
        del T[jlocate:jlocate + len(jstr)]
    T = ''.join(T)
    # What remains is plain base64; decode it by hand
    keyStr = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/="
    a = []
    e = 0
    while e < len(T):
        b = keyStr.index(T[e]); e += 1
        d = keyStr.index(T[e]); e += 1
        f = keyStr.index(T[e]); e += 1
        g = keyStr.index(T[e]); e += 1
        b = b << 2 | d >> 4
        d = (d & 15) << 4 | f >> 2
        h = (f & 3) << 6 | g
        a.append(b)
        if 64 != f:
            a.append(d)
        if 64 != g:
            a.append(h)
    _v = json.loads(bytes(a))
    return _v
```
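The decode loop at the end is nothing exotic: it is a line-for-line port of the page's JS base64 decoder, where `'='` sits at index 64 of `keyStr`. Pulled out as a standalone function (a sketch for clarity, not part of the original class), you can check it against the standard library:

```python
import base64

KEY_STR = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/="

def js_b64_decode(t):
    """Port of the in-page JS decoder: plain base64 with '=' at index 64."""
    a, e = [], 0
    while e < len(t):
        b = KEY_STR.index(t[e]); e += 1
        d = KEY_STR.index(t[e]); e += 1
        f = KEY_STR.index(t[e]); e += 1
        g = KEY_STR.index(t[e]); e += 1
        a.append(b << 2 | d >> 4)
        if f != 64:
            a.append((d & 15) << 4 | f >> 2)
        if g != 64:
            a.append((f & 3) << 6 | g)
    return bytes(a)

# Matches the standard library's decoder on ordinary input
assert js_b64_decode('aGVsbG8=') == base64.b64decode('aGVsbG8=') == b'hello'
```

This is why `json.loads(bytes(a))` works at the end: after the nonce slices are stripped, what's left of `DATA` is just base64-encoded JSON.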
That wraps up "how to scrape comics with Python plus JS decryption". Thanks for reading, and hopefully you found it useful!