Model training APIs
compile
method
Model.compile(
    optimizer="rmsprop",
    loss=None,
    metrics=None,
    loss_weights=None,
    weighted_metrics=None,
    run_eagerly=None,
    steps_per_execution=None,
    jit_compile=None,
    **kwargs
)
Configures the model for training.
Example
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss=tf.keras.losses.BinaryCrossentropy(),
    metrics=[
        tf.keras.metrics.BinaryAccuracy(),
        tf.keras.metrics.FalseNegatives(),
    ],
)
Arguments
- optimizer: String (name of optimizer) or optimizer instance. See `tf.keras.optimizers`.
- loss: Loss function. May be a string (name of loss function), or a `tf.keras.losses.Loss` instance. See `tf.keras.losses`. A loss function is any callable with the signature `loss = fn(y_true, y_pred)`, where `y_true` are the ground truth values, and `y_pred` are the model's predictions. `y_true` should have shape `(batch_size, d0, .. dN)` (except in the case of sparse loss functions such as sparse categorical crossentropy which expects integer arrays of shape `(batch_size, d0, .. dN-1)`). `y_pred` should have shape `(batch_size, d0, .. dN)`. The loss function should return a float tensor. If a custom `Loss` instance is used and reduction is set to `None`, the return value has shape `(batch_size, d0, .. dN-1)`, i.e. per-sample or per-timestep loss values; otherwise, it is a scalar. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses (see the multi-output sketch after this argument list). The loss value that will be minimized by the model will then be the sum of all individual losses, unless `loss_weights` is specified.
- metrics: List of metrics to be evaluated by the model during training and testing. Each of these can be a string (name of a built-in function), a function, or a `tf.keras.metrics.Metric` instance. See `tf.keras.metrics`. Typically you will use `metrics=['accuracy']`. A function is any callable with the signature `result = fn(y_true, y_pred)`. To specify different metrics for different outputs of a multi-output model, you could also pass a dictionary, such as `metrics={'output_a': 'accuracy', 'output_b': ['accuracy', 'mse']}`. You can also pass a list to specify a metric or a list of metrics for each output, such as `metrics=[['accuracy'], ['accuracy', 'mse']]` or `metrics=['accuracy', ['accuracy', 'mse']]`. When you pass the strings 'accuracy' or 'acc', we convert this to one of `tf.keras.metrics.BinaryAccuracy`, `tf.keras.metrics.CategoricalAccuracy`, or `tf.keras.metrics.SparseCategoricalAccuracy` based on the loss function used and the model output shape. We do a similar conversion for the strings 'crossentropy' and 'ce' as well.
- loss_weights: Optional list or dictionary specifying scalar coefficients (Python floats) to weight the loss contributions of different model outputs. The loss value that will be minimized by the model will then be the weighted sum of all individual losses, weighted by the `loss_weights` coefficients. If a list, it is expected to have a 1:1 mapping to the model's outputs. If a dict, it is expected to map output names (strings) to scalar coefficients.
- weighted_metrics: List of metrics to be evaluated and weighted by `sample_weight` or `class_weight` during training and testing.
- run_eagerly: Bool. Defaults to `False`. If `True`, this `Model`'s logic will not be wrapped in a `tf.function`. Recommended to leave this as `None` unless your `Model` cannot be run inside a `tf.function`. `run_eagerly=True` is not supported when using `tf.distribute.experimental.ParameterServerStrategy`.
- steps_per_execution: Int. Defaults to 1. The number of batches to run during each `tf.function` call. Running multiple batches inside a single `tf.function` call can greatly improve performance on TPUs or small models with a large Python overhead. At most, one full epoch will be run each execution. If a number larger than the size of the epoch is passed, the execution will be truncated to the size of the epoch. Note that if `steps_per_execution` is set to `N`, `Callback.on_batch_begin` and `Callback.on_batch_end` methods will only be called every `N` batches (i.e. before/after each `tf.function` execution).
- jit_compile: If `True`, compile the model training step with XLA. XLA is an optimizing compiler for machine learning. `jit_compile` is not enabled by default. This option cannot be enabled with `run_eagerly=True`. Note that `jit_compile=True` may not necessarily work for all models. For more information on supported operations, please refer to the XLA documentation. Also refer to known XLA issues for more details.
- **kwargs: Arguments supported for backwards compatibility only.
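For a multi-output model, the per-output dictionaries described above fit together as follows. This is a minimal sketch, assuming a hypothetical functional model with two outputs named `output_a` and `output_b`:

```python
import tensorflow as tf

# Hypothetical two-output model, for illustration only.
inputs = tf.keras.Input(shape=(8,))
out_a = tf.keras.layers.Dense(1, activation="sigmoid", name="output_a")(inputs)
out_b = tf.keras.layers.Dense(4, name="output_b")(inputs)
model = tf.keras.Model(inputs, [out_a, out_b])

# One loss and metric set per output, keyed by output name.
# loss_weights scales each output's contribution to the total minimized loss.
model.compile(
    optimizer="rmsprop",
    loss={"output_a": "binary_crossentropy", "output_b": "mse"},
    loss_weights={"output_a": 1.0, "output_b": 0.5},
    metrics={"output_a": "accuracy", "output_b": ["mse"]},
)
```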
fit
method
Model.fit(
    x=None,
    y=None,
    batch_size=None,
    epochs=1,
    verbose="auto",
    callbacks=None,
    validation_split=0.0,
    validation_data=None,
    shuffle=True,
    class_weight=None,
    sample_weight=None,
    initial_epoch=0,
    steps_per_epoch=None,
    validation_steps=None,
    validation_batch_size=None,
    validation_freq=1,
    max_queue_size=10,
    workers=1,
    use_multiprocessing=False,
)
Trains the model for a fixed number of epochs (iterations on a dataset).
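As a quick illustration, here is a minimal sketch; the data and architecture are hypothetical, chosen only to make the call runnable:

```python
import numpy as np
import tensorflow as tf

# Hypothetical toy data: 1000 samples, 8 features, binary labels.
x_train = np.random.random((1000, 8))
y_train = np.random.randint(0, 2, size=(1000, 1))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="rmsprop", loss="binary_crossentropy",
              metrics=["accuracy"])

# Train for 5 epochs, holding out the last 20% of samples for validation.
history = model.fit(x_train, y_train, batch_size=32, epochs=5,
                    validation_split=0.2)
```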
Arguments
- x: Input data. It could be:
  - A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
  - A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).
  - A dict mapping input names to the corresponding array/tensors, if the model has named inputs.
  - A `tf.data` dataset. Should return a tuple of either `(inputs, targets)` or `(inputs, targets, sample_weights)`.
  - A generator or `keras.utils.Sequence` returning `(inputs, targets)` or `(inputs, targets, sample_weights)`.
  - A `tf.keras.utils.experimental.DatasetCreator`, which wraps a callable that takes a single argument of type `tf.distribute.InputContext`, and returns a `tf.data.Dataset`. `DatasetCreator` should be used when users prefer to specify the per-replica batching and sharding logic for the `Dataset`. See the `tf.keras.utils.experimental.DatasetCreator` doc for more information.
  A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given below. If using `tf.distribute.experimental.ParameterServerStrategy`, only the `DatasetCreator` type is supported for `x`.
- y: Target data. Like the input data `x`, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with `x` (you cannot have Numpy inputs and tensor targets, or inversely). If `x` is a dataset, generator, or `keras.utils.Sequence` instance, `y` should not be specified (since targets will be obtained from `x`).
- batch_size: Integer or `None`. Number of samples per gradient update. If unspecified, `batch_size` will default to 32. Do not specify the `batch_size` if your data is in the form of datasets, generators, or `keras.utils.Sequence` instances (since they generate batches).
- epochs: Integer. Number of epochs to train the model. An epoch is an iteration over the entire `x` and `y` data provided (unless the `steps_per_epoch` flag is set to something other than `None`). Note that in conjunction with `initial_epoch`, `epochs` is to be understood as "final epoch". The model is not trained for a number of iterations given by `epochs`, but merely until the epoch of index `epochs` is reached.
- verbose: 'auto', 0, 1, or 2. Verbosity mode. 0 = silent, 1 = progress bar, 2 = one line per epoch. 'auto' defaults to 1 for most cases, but 2 when used with `ParameterServerStrategy`. Note that the progress bar is not particularly useful when logged to a file, so `verbose=2` is recommended when not running interactively (e.g., in a production environment).
- callbacks: List of `keras.callbacks.Callback` instances. List of callbacks to apply during training. See `tf.keras.callbacks`. Note `tf.keras.callbacks.ProgbarLogger` and `tf.keras.callbacks.History` callbacks are created automatically and need not be passed into `model.fit`. `tf.keras.callbacks.ProgbarLogger` is created or not based on the `verbose` argument to `model.fit`. Callbacks with batch-level calls are currently unsupported with `tf.distribute.experimental.ParameterServerStrategy`, and users are advised to implement epoch-level calls instead with an appropriate `steps_per_epoch` value.
- validation_split: Float between 0 and 1. Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the `x` and `y` data provided, before shuffling. This argument is not supported when `x` is a dataset, generator, or `keras.utils.Sequence` instance. If both `validation_data` and `validation_split` are provided, `validation_data` will override `validation_split`. `validation_split` is not yet supported with `tf.distribute.experimental.ParameterServerStrategy`.
- validation_data: Data on which to evaluate the loss and any model metrics at the end of each epoch. The model will not be trained on this data. Thus, note the fact that the validation loss of data provided using `validation_split` or `validation_data` is not affected by regularization layers like noise and dropout. `validation_data` will override `validation_split`. `validation_data` could be:
  - A tuple `(x_val, y_val)` of Numpy arrays or tensors.
  - A tuple `(x_val, y_val, val_sample_weights)` of NumPy arrays.
  - A `tf.data.Dataset`.
  - A Python generator or `keras.utils.Sequence` returning `(inputs, targets)` or `(inputs, targets, sample_weights)`.
  `validation_data` is not yet supported with `tf.distribute.experimental.ParameterServerStrategy`.
- shuffle: Boolean (whether to shuffle the training data before each epoch) or str (for 'batch'). This argument is ignored when `x` is a generator or an object of `tf.data.Dataset`. 'batch' is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when `steps_per_epoch` is not `None`.
- class_weight: Optional dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to "pay more attention" to samples from an under-represented class.
- sample_weight: Optional Numpy array of weights for the training samples, used for weighting the loss function (during training only). You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape `(samples, sequence_length)`, to apply a different weight to every timestep of every sample. This argument is not supported when `x` is a dataset, generator, or `keras.utils.Sequence` instance; instead, provide the sample weights as the third element of `x`.
- initial_epoch: Integer. Epoch at which to start training (useful for resuming a previous training run).
- steps_per_epoch: Integer or `None`. Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. When training with input tensors such as TensorFlow data tensors, the default `None` is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined. If `x` is a `tf.data` dataset, and `steps_per_epoch` is `None`, the epoch will run until the input dataset is exhausted. When passing an infinitely repeating dataset, you must specify the `steps_per_epoch` argument. If `steps_per_epoch=-1` the training will run indefinitely with an infinitely repeating dataset. This argument is not supported with array inputs. When using `tf.distribute.experimental.ParameterServerStrategy`, `steps_per_epoch=None` is not supported.
- validation_steps: Only relevant if `validation_data` is provided and is a `tf.data` dataset. Total number of steps (batches of samples) to draw before stopping when performing validation at the end of every epoch. If `validation_steps` is `None`, validation will run until the `validation_data` dataset is exhausted. In the case of an infinitely repeated dataset, it will run into an infinite loop. If `validation_steps` is specified and only part of the dataset will be consumed, the evaluation will start from the beginning of the dataset at each epoch. This ensures that the same validation samples are used every time.
- validation_batch_size: Integer or `None`. Number of samples per validation batch. If unspecified, will default to `batch_size`. Do not specify the `validation_batch_size` if your data is in the form of datasets, generators, or `keras.utils.Sequence` instances (since they generate batches).
- validation_freq: Only relevant if validation data is provided. Integer or `collections.abc.Container` instance (e.g. list, tuple, etc.). If an integer, specifies how many training epochs to run before a new validation run is performed, e.g. `validation_freq=2` runs validation every 2 epochs. If a Container, specifies the epochs on which to run validation, e.g. `validation_freq=[1, 2, 10]` runs validation at the end of the 1st, 2nd, and 10th epochs.
- max_queue_size: Integer. Used for generator or `keras.utils.Sequence` input only. Maximum size for the generator queue. If unspecified, `max_queue_size` will default to 10.
- workers: Integer. Used for generator or `keras.utils.Sequence` input only. Maximum number of processes to spin up when using process-based threading. If unspecified, `workers` will default to 1.
- use_multiprocessing: Boolean. Used for generator or `keras.utils.Sequence` input only. If `True`, use process-based threading. If unspecified, `use_multiprocessing` will default to `False`. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator, as they can't be passed easily to children processes.
Unpacking behavior for iterator-like inputs: A common pattern is to pass a `tf.data.Dataset`, generator, or `tf.keras.utils.Sequence` to the `x` argument of fit, which will in fact yield not only features (`x`) but optionally targets (`y`) and sample weights. Keras requires that the output of such iterator-likes be unambiguous. The iterator should return a tuple of length 1, 2, or 3, where the optional second and third elements will be used for `y` and `sample_weight` respectively. Any other type provided will be wrapped in a length-one tuple, effectively treating everything as `x`. When yielding dicts, they should still adhere to the top-level tuple structure, e.g. `({"x0": x0, "x1": x1}, y)`. Keras will not attempt to separate features, targets, and weights from the keys of a single dict.
A notable unsupported data type is the namedtuple. The reason is that it behaves like both an ordered datatype (tuple) and a mapping datatype (dict). So given a namedtuple of the form `namedtuple("example_tuple", ["y", "x"])`, it is ambiguous whether to reverse the order of the elements when interpreting the value. Even worse is a tuple of the form `namedtuple("other_tuple", ["x", "y", "z"])`, where it is unclear if the tuple was intended to be unpacked into `x`, `y`, and `sample_weight` or passed through as a single element to `x`. As a result, the data processing code will simply raise a `ValueError` if it encounters a namedtuple. (Along with instructions to remedy the issue.)
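For instance, a valid `tf.data.Dataset` yielding dict features inside the required top-level tuple might be built like this (the array names here are hypothetical):

```python
import numpy as np
import tensorflow as tf

# Hypothetical inputs for a model with two named inputs "x0" and "x1".
x0 = np.random.random((100, 4))
x1 = np.random.random((100, 6))
y = np.random.randint(0, 2, size=(100, 1))
w = np.random.random((100,))  # optional per-sample weights

# Top-level tuple of length 3: (features, targets, sample_weights).
# The features element may itself be a dict keyed by input name.
dataset = tf.data.Dataset.from_tensor_slices(({"x0": x0, "x1": x1}, y, w))
dataset = dataset.batch(32)

# model.fit(dataset, epochs=2)  # do not pass y: it comes from the dataset
```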
Returns
A `History` object. Its `History.history` attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable).
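For example, with the toy setup sketched earlier, the returned record can be inspected like this:

```python
history = model.fit(x_train, y_train, epochs=3, validation_split=0.2)
print(history.history.keys())   # e.g. dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])
print(history.history["loss"])  # one training loss value per epoch
```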
Raises
- RuntimeError: 1. If the model was never compiled, or 2. if `model.fit` is wrapped in `tf.function`.
- ValueError: In case of mismatch between the provided input data and what the model expects, or when the input data is empty.
evaluate
method
Model.evaluate(
    x=None,
    y=None,
    batch_size=None,
    verbose="auto",
    sample_weight=None,
    steps=None,
    callbacks=None,
    max_queue_size=10,
    workers=1,
    use_multiprocessing=False,
    return_dict=False,
    **kwargs
)
Returns the loss value & metrics values for the model in test mode.
Computation is done in batches (see the `batch_size` arg.).
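A minimal sketch, assuming the compiled toy model from the `fit` example above and hypothetical held-out arrays `x_test` and `y_test`:

```python
# List result: loss first, then metrics, in model.metrics_names order.
results = model.evaluate(x_test, y_test, batch_size=128)
print("test loss, test acc:", results)

# Dict result keyed by metric name.
results = model.evaluate(x_test, y_test, batch_size=128, return_dict=True)
```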
Arguments
- x: Input data. It could be:
  - A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
  - A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).
  - A dict mapping input names to the corresponding array/tensors, if the model has named inputs.
  - A `tf.data` dataset. Should return a tuple of either `(inputs, targets)` or `(inputs, targets, sample_weights)`.
  - A generator or `keras.utils.Sequence` returning `(inputs, targets)` or `(inputs, targets, sample_weights)`.
  A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the "Unpacking behavior for iterator-like inputs" section of `Model.fit`.
- y: Target data. Like the input data `x`, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with `x` (you cannot have Numpy inputs and tensor targets, or inversely). If `x` is a dataset, generator or `keras.utils.Sequence` instance, `y` should not be specified (since targets will be obtained from the iterator/dataset).
- batch_size: Integer or `None`. Number of samples per batch of computation. If unspecified, `batch_size` will default to 32. Do not specify the `batch_size` if your data is in the form of a dataset, generators, or `keras.utils.Sequence` instances (since they generate batches).
- verbose: `"auto"`, 0, 1, or 2. Verbosity mode. 0 = silent, 1 = progress bar, 2 = single line. `"auto"` defaults to 1 for most cases, and to 2 when used with `ParameterServerStrategy`. Note that the progress bar is not particularly useful when logged to a file, so `verbose=2` is recommended when not running interactively (e.g. in a production environment).
- sample_weight: Optional Numpy array of weights for the test samples, used for weighting the loss function. You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape `(samples, sequence_length)`, to apply a different weight to every timestep of every sample. This argument is not supported when `x` is a dataset; instead, pass sample weights as the third element of `x`.
- steps: Integer or `None`. Total number of steps (batches of samples) before declaring the evaluation round finished. Ignored with the default value of `None`. If `x` is a `tf.data` dataset and `steps` is `None`, `evaluate` will run until the dataset is exhausted. This argument is not supported with array inputs.
- callbacks: List of `keras.callbacks.Callback` instances. List of callbacks to apply during evaluation. See callbacks.
- max_queue_size: Integer. Used for generator or `keras.utils.Sequence` input only. Maximum size for the generator queue. If unspecified, `max_queue_size` will default to 10.
- workers: Integer. Used for generator or `keras.utils.Sequence` input only. Maximum number of processes to spin up when using process-based threading. If unspecified, `workers` will default to 1.
- use_multiprocessing: Boolean. Used for generator or `keras.utils.Sequence` input only. If `True`, use process-based threading. If unspecified, `use_multiprocessing` will default to `False`. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator, as they can't be passed easily to children processes.
- return_dict: If `True`, loss and metric results are returned as a dict, with each key being the name of the metric. If `False`, they are returned as a list.
- **kwargs: Unused at this time.

See the discussion of Unpacking behavior for iterator-like inputs for `Model.fit`.
Returns
Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute `model.metrics_names` will give you the display labels for the scalar outputs.
Raises
- RuntimeError: If `model.evaluate` is wrapped in a `tf.function`.
predict
method
Model.predict(
    x,
    batch_size=None,
    verbose="auto",
    steps=None,
    callbacks=None,
    max_queue_size=10,
    workers=1,
    use_multiprocessing=False,
)
Generates output predictions for the input samples.
Computation is done in batches. This method is designed for batch processing of large numbers of inputs. It is not intended for use inside of loops that iterate over your data and process small numbers of inputs at a time.
For small numbers of inputs that fit in one batch, directly use `__call__()` for faster execution, e.g., `model(x)`, or `model(x, training=False)` if you have layers such as `tf.keras.layers.BatchNormalization` that behave differently during inference. You may pair the individual model call with a `tf.function` for extra performance inside your inner loop. If you need access to numpy array values instead of tensors after your model call, you can use `tensor.numpy()` to get the numpy array value of an eager tensor.
Also, note the fact that test loss is not affected by regularization layers like noise and dropout.
Note: See this FAQ entry for more details about the difference between `Model` methods `predict()` and `__call__()`.
Arguments
- x: Input samples. It could be:
  - A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
  - A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).
  - A `tf.data` dataset.
  - A generator or `keras.utils.Sequence` instance.
  A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the "Unpacking behavior for iterator-like inputs" section of `Model.fit`.
- batch_size: Integer or `None`. Number of samples per batch. If unspecified, `batch_size` will default to 32. Do not specify the `batch_size` if your data is in the form of a dataset, generators, or `keras.utils.Sequence` instances (since they generate batches).
- verbose: `"auto"`, 0, 1, or 2. Verbosity mode. 0 = silent, 1 = progress bar, 2 = single line. `"auto"` defaults to 1 for most cases, and to 2 when used with `ParameterServerStrategy`. Note that the progress bar is not particularly useful when logged to a file, so `verbose=2` is recommended when not running interactively (e.g. in a production environment).
- steps: Total number of steps (batches of samples) before declaring the prediction round finished. Ignored with the default value of `None`. If `x` is a `tf.data` dataset and `steps` is `None`, `predict()` will run until the input dataset is exhausted.
- callbacks: List of `keras.callbacks.Callback` instances. List of callbacks to apply during prediction. See callbacks.
- max_queue_size: Integer. Used for generator or `keras.utils.Sequence` input only. Maximum size for the generator queue. If unspecified, `max_queue_size` will default to 10.
- workers: Integer. Used for generator or `keras.utils.Sequence` input only. Maximum number of processes to spin up when using process-based threading. If unspecified, `workers` will default to 1.
- use_multiprocessing: Boolean. Used for generator or `keras.utils.Sequence` input only. If `True`, use process-based threading. If unspecified, `use_multiprocessing` will default to `False`. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator, as they can't be passed easily to children processes.

See the discussion of Unpacking behavior for iterator-like inputs for `Model.fit`. Note that `Model.predict` uses the same interpretation rules as `Model.fit` and `Model.evaluate`, so inputs must be unambiguous for all three methods.
Returns
Numpy array(s) of predictions.
Raises
- RuntimeError: If `model.predict` is wrapped in a `tf.function`.
- ValueError: In case of mismatch between the provided input data and the model's expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size.
train_on_batch
method
Model.train_on_batch(
    x,
    y=None,
    sample_weight=None,
    class_weight=None,
    reset_metrics=True,
    return_dict=False,
)
Runs a single gradient update on a single batch of data.
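A typical use is a hand-written training loop. Here is a minimal sketch, reusing the hypothetical compiled model and `(x_train, y_train)` arrays from the `fit` example above:

```python
batch_size = 32
for epoch in range(3):
    for i in range(0, len(x_train), batch_size):
        x_batch = x_train[i : i + batch_size]
        y_batch = y_train[i : i + batch_size]
        logs = model.train_on_batch(x_batch, y_batch, return_dict=True)
    print(f"epoch {epoch}: {logs}")  # e.g. {'loss': ..., 'accuracy': ...}
```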
Arguments
- x: Input data. It could be:
  - A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
  - A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).
  - A dict mapping input names to the corresponding array/tensors, if the model has named inputs.
- y: Target data. Like the input data `x`, it could be either Numpy array(s) or TensorFlow tensor(s).
- sample_weight: Optional array of the same length as `x`, containing weights to apply to the model's loss for each sample. In the case of temporal data, you can pass a 2D array with shape `(samples, sequence_length)`, to apply a different weight to every timestep of every sample.
- class_weight: Optional dictionary mapping class indices (integers) to a weight (float) to apply to the model's loss for the samples from this class during training. This can be useful to tell the model to "pay more attention" to samples from an under-represented class.
- reset_metrics: If `True`, the metrics returned will be only for this batch. If `False`, the metrics will be statefully accumulated across batches.
- return_dict: If `True`, loss and metric results are returned as a dict, with each key being the name of the metric. If `False`, they are returned as a list.
Returns
Scalar training loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute `model.metrics_names` will give you the display labels for the scalar outputs.
Raises
- RuntimeError: If `model.train_on_batch` is wrapped in a `tf.function`.
test_on_batch
method
Model.test_on_batch(
    x, y=None, sample_weight=None, reset_metrics=True, return_dict=False
)
Test the model on a single batch of samples.
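A minimal sketch, assuming the compiled toy model and hypothetical held-out arrays `x_test` and `y_test` from the earlier examples:

```python
logs = model.test_on_batch(x_test[:32], y_test[:32], return_dict=True)
print(logs)  # e.g. {'loss': ..., 'accuracy': ...}
```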
Arguments
- x: Input data. It could be:
  - A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
  - A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).
  - A dict mapping input names to the corresponding array/tensors, if the model has named inputs.
- y: Target data. Like the input data `x`, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with `x` (you cannot have Numpy inputs and tensor targets, or inversely).
- sample_weight: Optional array of the same length as `x`, containing weights to apply to the model's loss for each sample. In the case of temporal data, you can pass a 2D array with shape `(samples, sequence_length)`, to apply a different weight to every timestep of every sample.
- reset_metrics: If `True`, the metrics returned will be only for this batch. If `False`, the metrics will be statefully accumulated across batches.
- return_dict: If `True`, loss and metric results are returned as a dict, with each key being the name of the metric. If `False`, they are returned as a list.
Returns
Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute `model.metrics_names` will give you the display labels for the scalar outputs.
Raises
- RuntimeError: If `model.test_on_batch` is wrapped in a `tf.function`.
predict_on_batch
method
Model.predict_on_batch(x)
Returns predictions for a single batch of samples.
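For example, with the toy model from the earlier sketches (the input array is hypothetical):

```python
preds = model.predict_on_batch(x_test[:32])  # numpy array, e.g. shape (32, 1)
```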
Arguments
- x: Input data. It could be:
  - A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
  - A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).
Returns
Numpy array(s) of predictions.
Raises
- RuntimeError: If `model.predict_on_batch` is wrapped in a `tf.function`.
run_eagerly
property
tf.keras.Model.run_eagerly
Settable property indicating whether the model should run eagerly.
Running eagerly means that your model will be run step by step, like Python code. Your model might run slower, but it should become easier for you to debug it by stepping into individual layer calls.
By default, we will attempt to compile your model to a static graph to deliver the best execution performance.
Returns
Boolean, whether the model should run eagerly.
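A sketch of a typical debugging workflow (model construction omitted; any compiled model works):

```python
# Enable eager execution at compile time to step through layer calls
# with a Python debugger...
model.compile(optimizer="rmsprop", loss="mse", run_eagerly=True)
assert model.run_eagerly is True

# ...then flip the settable property back off to restore tf.function speed.
model.run_eagerly = False
```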