This post walks through visualizing PyTorch network structures. The material is presented from a practical, technical angle; hopefully you will take something useful away from it.
PyTorch network structure visualization: PyTorch is a deep learning tensor library optimized for both GPUs and CPUs.
Installation
The required packages can be installed with the following commands:
conda install pytorch-nightly -c pytorch
conda install graphviz
conda install torchvision
conda install tensorwatch
This article is based on the following versions:
torchvision.__version__
'0.2.1'
torch.__version__
'1.2.0.dev20190610'
sys.version
'3.6.8 |Anaconda custom (64-bit)| (default, Dec 30 2018, 01:22:34) [GCC 7.3.0]'
Importing the libraries
import sys
import torch
import tensorwatch as tw
import torchvision.models
Visualizing the network structure
alexnet_model = torchvision.models.alexnet()
tw.draw_model(alexnet_model, [1, 3, 224, 224])
This loads AlexNet. The draw_model function takes three arguments: the first is the model, the second is the input_shape, and the third is the orientation, which can be 'LR' or 'TB' for a left-to-right or top-to-bottom layout respectively, as in the example below.
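For instance, to draw the graph left-to-right instead of top-to-bottom, the orientation argument described above can be passed explicitly (a minimal sketch based on the three parameters listed here):

import tensorwatch as tw
import torchvision.models

# build AlexNet and draw it with a left-to-right ('LR') layout
alexnet_model = torchvision.models.alexnet()
tw.draw_model(alexnet_model, [1, 3, 224, 224], orientation='LR')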
In a notebook, running the code above renders a diagram like the one below, visualizing the network structure along with the name and shape of each layer.
Counting network parameters
The model_stats method reports per-layer parameter statistics.
tw.model_stats(alexnet_model, [1, 3, 224, 224])

[MAdd]: Dropout is not supported!
[Flops]: Dropout is not supported!
[Memory]: Dropout is not supported!
[MAdd]: Dropout is not supported!
[Flops]: Dropout is not supported!
[Memory]: Dropout is not supported!
[MAdd]: Dropout is not supported!
[Flops]: Dropout is not supported!
[Memory]: Dropout is not supported!
[MAdd]: Dropout is not supported!
[Flops]: Dropout is not supported!
[Memory]: Dropout is not supported!
[MAdd]: Dropout is not supported!
[Flops]: Dropout is not supported!
[Memory]: Dropout is not supported!
[MAdd]: Dropout is not supported!
[Flops]: Dropout is not supported!
[Memory]: Dropout is not supported!

alexnet_model.features
Sequential(
  (0): Conv2d(3, 64, kernel_size=(11, 11), stride=(4, 4), padding=(2, 2))
  (1): ReLU(inplace=True)
  (2): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
  (3): Conv2d(64, 192, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
  (4): ReLU(inplace=True)
  (5): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
  (6): Conv2d(192, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (7): ReLU(inplace=True)
  (8): Conv2d(384, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (9): ReLU(inplace=True)
  (10): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (11): ReLU(inplace=True)
  (12): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
)

alexnet_model.classifier
Sequential(
  (0): Dropout(p=0.5)
  (1): Linear(in_features=9216, out_features=4096, bias=True)
  (2): ReLU(inplace=True)
  (3): Dropout(p=0.5)
  (4): Linear(in_features=4096, out_features=4096, bias=True)
  (5): ReLU(inplace=True)
  (6): Linear(in_features=4096, out_features=1000, bias=True)
)
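As a quick sanity check on these statistics, the total number of trainable parameters can also be computed with plain PyTorch, independent of tensorwatch (a minimal sketch):

import torchvision.models

alexnet_model = torchvision.models.alexnet()

# sum the element counts of all trainable tensors in the model
total_params = sum(p.numel() for p in alexnet_model.parameters() if p.requires_grad)
print(f'Total trainable parameters: {total_params:,}')  # roughly 61 million for AlexNet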
That concludes this walkthrough of PyTorch network structure visualization. If you have run into a similar question, the steps above should help you work through it.