1. tf.train.exponential_decay(): exponentially decaying learning rate
# tf.train.exponential_decay(learning_rate, global_step, decay_steps, decay_rate, staircase=True/False)
# Exponentially decaying learning rate.
# learning_rate - initial learning rate
# global_step   - current training step
# decay_steps   - number of iterations needed for one full pass over the training data; = total training samples / batch size
# decay_rate    - decay rate
# staircase     - decay mode; if True, the learning rate is updated once every decay_steps steps; if False, it is updated at every step.

import tensorflow as tf
import tensorflow.contrib.slim as slim

initial_learning_rate = 0.001
global_step = tf.Variable(0, trainable=False)
decay_steps = 100
decay_rate = 0.95
total_loss = slim.losses.get_total_loss()
learning_rate = tf.train.exponential_decay(initial_learning_rate, global_step,
                                           decay_steps, decay_rate,
                                           staircase=True, name='learning_rate')
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(total_loss, global_step=global_step)
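For reference, the schedule tf.train.exponential_decay computes is decayed_lr = initial_learning_rate * decay_rate ** (global_step / decay_steps), and with staircase=True the exponent is truncated to an integer so the rate decays in discrete steps. Below is a minimal sketch of that arithmetic in plain Python (no TensorFlow graph; the function name and the step values are illustrative, not part of the TF API):

def exponential_decay(initial_lr, global_step, decay_steps, decay_rate, staircase=True):
    # Same arithmetic as tf.train.exponential_decay, written in plain Python.
    if staircase:
        exponent = global_step // decay_steps  # integer division -> stepwise decay
    else:
        exponent = global_step / decay_steps   # smooth decay at every step
    return initial_lr * decay_rate ** exponent

# With the values above (initial_lr=0.001, decay_steps=100, decay_rate=0.95):
# step 0   -> 0.001000
# step 100 -> 0.000950
# step 200 -> 0.000903 (approximately)
for step in (0, 100, 200):
    print(step, exponential_decay(0.001, step, 100, 0.95))

Note that in the TF 1.x snippet above, passing global_step to minimize() is what makes the optimizer increment the step counter on every update, which in turn drives the decay.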