Get Keras middle layer output in a Sequential model

At first, you have to create a feature extractor based on your desired end-product layer. Your graph gets disconnected here: bneck.layers[12].output. Let's say you have model A and model B, and you want the output of some layers (let's say 2 layers) from model A and want to use them in model B to complete its architecture. To do that, you first need to create 2 feature extractors from model A as follows:

extractor_one = Model(modelA.input, expected_layer_1.output)
extractor_two = Model(modelA.input, expected_layer_2.output)
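
Those two extractors can then be wired into model B with the functional API. The following is only a rough sketch under assumptions (modelA is a built image model, and the pooling, concatenation, and 10-class head are placeholders chosen for illustration); a concrete, runnable version follows below.

inp = tf.keras.Input(shape=modelA.input_shape[1:])         # same input size as model A
feat_1 = extractor_one(inp)                                # output of expected_layer_1
feat_2 = extractor_two(inp)                                # output of expected_layer_2
feat_1 = tf.keras.layers.GlobalAveragePooling2D()(feat_1)  # assumes 4D feature maps
feat_2 = tf.keras.layers.GlobalAveragePooling2D()(feat_2)
merged = tf.keras.layers.Concatenate()([feat_1, feat_2])
out = tf.keras.layers.Dense(10, activation='softmax')(merged)
modelB = tf.keras.Model(inp, out)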

Here I will walk you through a simple code example. There may be more flexible and smarter ways to do this, but here is one of them. I will build a sequential model and train it on CIFAR10, and then I will build a functional model that uses some of the sequential model's layers (just 2 of them) and train the complete model on CIFAR100.

import tensorflow as tf 

seq_model = tf.keras.Sequential(
    [
        tf.keras.Input(shape=(32, 32, 3)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.Conv2D(128, 3, activation="relu"),
        tf.keras.layers.Conv2D(256, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(), 
        tf.keras.layers.Dense(10, activation='softmax')
     
    ]
)

seq_model.summary()
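
For reference, with 3x3 'valid' convolutions on a 32x32x3 input, the summary should report output shapes roughly as follows (sketched here as comments rather than the full summary table):

# Conv2D(16, 3)          -> (None, 30, 30, 16)
# Conv2D(32, 3)          -> (None, 28, 28, 32)
# Conv2D(64, 3)          -> (None, 26, 26, 64)
# Conv2D(128, 3)         -> (None, 24, 24, 128)
# Conv2D(256, 3)         -> (None, 22, 22, 256)
# GlobalAveragePooling2D -> (None, 256)
# Dense(10)              -> (None, 10)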

Train on the CIFAR10 data set

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()

# train set / data 
x_train = x_train.astype('float32') / 255
y_train = tf.keras.utils.to_categorical(y_train, num_classes=10)

print(x_train.shape, y_train.shape)

seq_model.compile(
          loss      = tf.keras.losses.CategoricalCrossentropy(),
          metrics   = [tf.keras.metrics.CategoricalAccuracy()],
          optimizer = tf.keras.optimizers.Adam())
# fit 
seq_model.fit(x_train, y_train, batch_size=128, epochs=5, verbose = 2)

# -------------------------------------------------------------------
(50000, 32, 32, 3) (50000, 10)
Epoch 1/5
27s 66ms/step - loss: 1.2229 - categorical_accuracy: 0.5647
Epoch 2/5
26s 67ms/step - loss: 1.1389 - categorical_accuracy: 0.5950
Epoch 3/5
26s 67ms/step - loss: 1.0890 - categorical_accuracy: 0.6127
Epoch 4/5
26s 67ms/step - loss: 1.0475 - categorical_accuracy: 0.6272
Epoch 5/5
26s 67ms/step - loss: 1.0176 - categorical_accuracy: 0.6409

Now, let's say we want some outputs from this sequential model, say from the following two layers:

tf.keras.layers.Conv2D(64, 3, activation="relu") # (None, 26, 26, 64)   
tf.keras.layers.Conv2D(256, 3, activation="relu") # (None, 22, 22, 256) 
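
If you are not sure which index a given layer has, one simple way (an extra helper, not part of the original code) is to print the index, name, and output shape of every layer:

for i, layer in enumerate(seq_model.layers):
    print(i, layer.name, layer.output.shape)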

To get them, we first create two feature extractors from the sequential model:

last_layer_outputs = tf.keras.Model(seq_model.input, seq_model.layers[-3].output)
last_layer_outputs.summary() # (None, 22, 22, 256)  

mid_layer_outputs = tf.keras.Model(seq_model.input, seq_model.layers[2].output)
mid_layer_outputs.summary() # (None, 26, 26, 64)   
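
As a quick sanity check (an assumed extra step, not in the original code), you can pass a dummy batch through both extractors and confirm that the shapes match the comments above:

dummy = tf.random.normal((1, 32, 32, 3))
print(mid_layer_outputs(dummy).shape)   # (1, 26, 26, 64)
print(last_layer_outputs(dummy).shape)  # (1, 22, 22, 256)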

Optionally, if we want to freeze them, we can also do that now. We freeze here because we are working with the same type of data set (CIFAR10 / CIFAR100).

print('last layer output')
# just freezing first 2 layer 
for layer in last_layer_outputs.layers[:2]:
  layer.trainable = False

# checking 
for l in last_layer_outputs.layers:
    print(l.name, l.trainable)


print('\nmid layer output')
# freeze all layers
mid_layer_outputs.trainable = False

# checking 
for l in mid_layer_outputs.layers:
    print(l.name, l.trainable)

last layer output
input_11 False
conv2d_81 False
conv2d_82 False
conv2d_83 False
conv2d_84 True
conv2d_85 True

mid layer output
input_11 False
conv2d_81 False
conv2d_82 False
conv2d_83 False
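
One thing worth knowing: these extractor models reuse the layer objects of seq_model rather than copying them, so freezing a layer through either extractor also freezes that same layer everywhere it is used, including in the new model built below. A small check (not in the original code) to illustrate this:

# the conv layer frozen via last_layer_outputs is the very same object inside seq_model
print(seq_model.layers[0] is last_layer_outputs.layers[1])  # True
print(seq_model.layers[0].trainable)                        # False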

Now, let's create a new model with the functional API and use the above two feature extractors.

encoder_input = tf.keras.Input(shape=(32, 32, 3), name="img")

last_x = last_layer_outputs(encoder_input)
print(last_x.shape) # (None, 22, 22, 256)

mid_x = mid_layer_outputs(encoder_input)
mid_x = tf.keras.layers.Conv2D(32, kernel_size=3, strides=1)(mid_x)
print(mid_x.shape) # (None, 24, 24, 32)

last_x = tf.keras.layers.GlobalMaxPooling2D()(last_x)
mid_x = tf.keras.layers.GlobalMaxPooling2D()(mid_x)
print(last_x.shape, mid_x.shape) # (None, 256) (None, 32)

encoder_output = tf.keras.layers.Concatenate()([last_x, mid_x])
print(encoder_output.shape) # (None, 288)

encoder_output = tf.keras.layers.Dense(100, activation='softmax')(encoder_output)
print(encoder_output.shape) # (None, 100)

encoder = tf.keras.Model(encoder_input, encoder_output, name="encoder")
encoder.summary()
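
To verify that the frozen layers carried over into the combined model, one option (again an extra step, not in the original answer) is to compare the trainable and non-trainable parameter counts:

trainable_params = sum(tf.keras.backend.count_params(w) for w in encoder.trainable_weights)
frozen_params = sum(tf.keras.backend.count_params(w) for w in encoder.non_trainable_weights)
print(trainable_params, frozen_params)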

Train on CIFAR100

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar100.load_data()

# train set / data 
x_train = x_train.astype('float32') / 255
y_train = tf.keras.utils.to_categorical(y_train, num_classes=100)

print(x_train.shape, y_train.shape)

encoder.compile(
          loss      = tf.keras.losses.CategoricalCrossentropy(),
          metrics   = [tf.keras.metrics.CategoricalAccuracy()],
          optimizer = tf.keras.optimizers.Adam())
# fit 
encoder.fit(x_train, y_train, batch_size=128, epochs=5, verbose = 1)
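
Optionally (not part of the original answer), you can evaluate the new model on the CIFAR100 test split after applying the same preprocessing:

# test set / data
x_test = x_test.astype('float32') / 255
y_test = tf.keras.utils.to_categorical(y_test, num_classes=100)

print(encoder.evaluate(x_test, y_test, batch_size=128, verbose=2))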

Reference: Feature extraction with a Sequential model
