
[Solved] ValueError in the Day 41 sample code: what causes it and how can it be fixed?

2020/11/06 11:01 PM
Computer Vision Deep Learning discussion board
林柏宇
Views: 42
Answers: 1
Bookmarks: 0

While running the Day 41 sample code, the following error is raised at line 52 of the code below, i.e. the callbacks part of the model.fit_generator call. May I ask why this happens and how to fix it?

> ValueError: Tensor conversion requested dtype float32_ref for Tensor with dtype float32:

```python
# Imports restored for context; the original snippet assumes these are already in scope.
import numpy as np
from keras.optimizers import Adam
from keras.callbacks import TensorBoard, ModelCheckpoint, ReduceLROnPlateau, EarlyStopping
# get_classes, get_anchors, create_model, create_tiny_model and
# data_generator_wrapper are defined in the sample's train.py.

annotation_path = '2007_train.txt'  # annotation file already converted to the expected format
log_dir = 'logs/000/'  # path where trained weights are saved
classes_path = 'model_data/voc_classes.txt'
anchors_path = 'model_data/yolo_anchors.txt'
class_names = get_classes(classes_path)
num_classes = len(class_names)
anchors = get_anchors(anchors_path)

input_shape = (416,416)  # multiple of 32, hw

is_tiny_version = len(anchors)==6  # default setting
if is_tiny_version:
    model = create_tiny_model(input_shape, anchors, num_classes,
        freeze_body=2, weights_path='model_data/tiny_yolo_weights.h5')
else:
    model = create_model(input_shape, anchors, num_classes,
        freeze_body=2, weights_path='model_data/yolo_weights.h5')  # make sure you know what you freeze

logging = TensorBoard(log_dir=log_dir)
checkpoint = ModelCheckpoint(log_dir + 'ep{epoch:03d}-loss{loss:.3f}-val_loss{val_loss:.3f}.h5',
    monitor='val_loss', save_weights_only=True, save_best_only=True, period=3)
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=3, verbose=1)
early_stopping = EarlyStopping(monitor='val_loss', min_delta=0, patience=10, verbose=1)

# split the annotations into training and validation sets
val_split = 0.1
with open(annotation_path) as f:
    lines = f.readlines()
np.random.seed(10101)
np.random.shuffle(lines)
np.random.seed(None)
num_val = int(len(lines)*val_split)
num_train = len(lines) - num_val

# Train with frozen layers first, to get a stable loss.
# Adjust num epochs to your dataset. This step is enough to obtain a not bad model.
# First freeze the darknet53 backbone (everything except YOLO's output layers) and train.
if True:
    model.compile(optimizer=Adam(lr=1e-3), loss={
        # use custom yolo_loss Lambda layer.
        'yolo_loss': lambda y_true, y_pred: y_pred})

    batch_size = 16
    print('Train on {} samples, val on {} samples, with batch size {}.'.format(num_train, num_val, batch_size))
    # The model trains on data produced by a generator; it is strongly recommended
    # to read and understand how data_generator_wrapper is implemented in train.py.
    model.fit_generator(data_generator_wrapper(lines[:num_train], batch_size, input_shape, anchors, num_classes),
            steps_per_epoch=max(1, num_train//batch_size),
            validation_data=data_generator_wrapper(lines[num_train:], batch_size, input_shape, anchors, num_classes),
            validation_steps=max(1, num_val//batch_size),
            epochs=50,
            initial_epoch=0,
            callbacks=[logging, checkpoint])
    model.save_weights(log_dir + 'trained_weights_stage_1.h5')

# Unfreeze and continue training, to fine-tune.
# Train longer if the result is not good.
if True:
    # set every layer to trainable
    for i in range(len(model.layers)):
        model.layers[i].trainable = True
    model.compile(optimizer=Adam(lr=1e-4),
        loss={'yolo_loss': lambda y_true, y_pred: y_pred})  # recompile to apply the change
    print('Unfreeze all of the layers.')

    batch_size = 16  # note that more GPU memory is required after unfreezing the body
    print('Train on {} samples, val on {} samples, with batch size {}.'.format(num_train, num_val, batch_size))
    model.fit_generator(data_generator_wrapper(lines[:num_train], batch_size, input_shape, anchors, num_classes),
        steps_per_epoch=max(1, num_train//batch_size),
        validation_data=data_generator_wrapper(lines[num_train:], batch_size, input_shape, anchors, num_classes),
        validation_steps=max(1, num_val//batch_size),
        epochs=100,
        initial_epoch=50,
        callbacks=[logging, checkpoint, reduce_lr, early_stopping])
    model.save_weights(log_dir + 'trained_weights_final.h5')
```

Answers

  • 2020/11/09 12:32 AM
    張維元 (WeiYuan)
    Upvotes: 1
    Downvotes: 0
    Comments: 2

    Hi, this is a version problem. Please check whether your Tensorflow and keras versions are a matching pair.


    You can look up compatible version pairings here: https://docs.floydhub.com/guides/environments/
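
    For example, here is a minimal sketch for confirming which versions are installed in your environment; the pinned pair in the trailing comment is only a hypothetical illustration, so use whichever pairing your course environment actually specifies.

```python
# Minimal sketch: print the installed TensorFlow/Keras versions so they can be
# compared against a known-compatible pairing from the table linked above.
import tensorflow as tf
import keras

print('TensorFlow:', tf.__version__)
print('Keras:', keras.__version__)

# If they are not a compatible pair, reinstall a matching one, e.g.
# (hypothetical pin shown for illustration only):
#   pip install tensorflow==1.14.0 keras==2.2.4
```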


    Glad we could discuss this in this Q&A. If anything is still unclear or fuzzy, feel free to keep asking; I look forward to the interaction and encouragement sparking a deeper discussion. You're also welcome to join the Line group I run, where sharing events are held from time to time. Come join the fun!