
Regression losses

MeanSquaredError class

tf.keras.losses.MeanSquaredError(reduction="auto", name="mean_squared_error")

Computes the mean of squares of errors between labels and predictions.
loss = square(y_true - y_pred)
Standalone usage:

>>> y_true = [[0., 1.], [0., 0.]]
>>> y_pred = [[1., 1.], [1., 0.]]
>>> # Using 'auto'/'sum_over_batch_size' reduction type.
>>> mse = tf.keras.losses.MeanSquaredError()
>>> mse(y_true, y_pred).numpy()
0.5
>>> # Calling with 'sample_weight'.
>>> mse(y_true, y_pred, sample_weight=[0.7, 0.3]).numpy()
0.25
>>> # Using 'sum' reduction type.
>>> mse = tf.keras.losses.MeanSquaredError(
...     reduction=tf.keras.losses.Reduction.SUM)
>>> mse(y_true, y_pred).numpy()
1.0
>>> # Using 'none' reduction type.
>>> mse = tf.keras.losses.MeanSquaredError(
...     reduction=tf.keras.losses.Reduction.NONE)
>>> mse(y_true, y_pred).numpy()
array([0.5, 0.5], dtype=float32)

Usage with the compile() API:

model.compile(optimizer='sgd', loss=tf.keras.losses.MeanSquaredError())
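
As a fuller (illustrative) sketch of the same compile() usage, assuming a small toy model and random training data that are not part of the API above:

import numpy as np
import tensorflow as tf

# Toy regression model; the architecture and data are placeholders for illustration.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='sgd', loss=tf.keras.losses.MeanSquaredError())

x = np.random.random((32, 4)).astype('float32')
y = np.random.random((32, 1)).astype('float32')
model.fit(x, y, epochs=1, batch_size=8, verbose=0)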

MeanAbsoluteError class

tf.keras.losses.MeanAbsoluteError(
    reduction="auto", name="mean_absolute_error"
)

Computes the mean of absolute difference between labels and predictions.
loss = abs(y_true - y_pred)
Standalone usage:

>>> y_true = [[0., 1.], [0., 0.]]
>>> y_pred = [[1., 1.], [1., 0.]]
>>> # Using 'auto'/'sum_over_batch_size' reduction type.
>>> mae = tf.keras.losses.MeanAbsoluteError()
>>> mae(y_true, y_pred).numpy()
0.5
>>> # Calling with 'sample_weight'.
>>> mae(y_true, y_pred, sample_weight=[0.7, 0.3]).numpy()
0.25
>>> # Using 'sum' reduction type.
>>> mae = tf.keras.losses.MeanAbsoluteError(
...     reduction=tf.keras.losses.Reduction.SUM)
>>> mae(y_true, y_pred).numpy()
1.0
>>> # Using 'none' reduction type.
>>> mae = tf.keras.losses.MeanAbsoluteError(
...     reduction=tf.keras.losses.Reduction.NONE)
>>> mae(y_true, y_pred).numpy()
array([0.5, 0.5], dtype=float32)

Usage with the compile() API:

model.compile(optimizer='sgd', loss=tf.keras.losses.MeanAbsoluteError())
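
The sample_weight result above (0.25) follows from weighting each per-sample loss and then averaging over the batch size; a quick NumPy sketch of that arithmetic, assuming the default 'sum_over_batch_size' behavior:

import numpy as np

y_true = np.array([[0., 1.], [0., 0.]])
y_pred = np.array([[1., 1.], [1., 0.]])
per_sample = np.mean(np.abs(y_true - y_pred), axis=-1)  # [0.5, 0.5]
weights = np.array([0.7, 0.3])
# Weighted per-sample losses summed, then divided by the batch size.
print(np.sum(per_sample * weights) / len(per_sample))  # 0.25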

MeanAbsolutePercentageError class

tf.keras.losses.MeanAbsolutePercentageError(
    reduction="auto", name="mean_absolute_percentage_error"
)

Computes the mean absolute percentage error between y_true and y_pred.
Formula:
loss = 100 * abs((y_true - y_pred) / y_true)
Note that to avoid dividing by zero, a small epsilon value is added to the denominator.
Standalone usage:

>>> y_true = [[2., 1.], [2., 3.]]
>>> y_pred = [[1., 1.], [1., 0.]]
>>> # Using 'auto'/'sum_over_batch_size' reduction type.
>>> mape = tf.keras.losses.MeanAbsolutePercentageError()
>>> mape(y_true, y_pred).numpy()
50.
>>> # Calling with 'sample_weight'.
>>> mape(y_true, y_pred, sample_weight=[0.7, 0.3]).numpy()
20.
>>> # Using 'sum' reduction type.
>>> mape = tf.keras.losses.MeanAbsolutePercentageError(
...     reduction=tf.keras.losses.Reduction.SUM)
>>> mape(y_true, y_pred).numpy()
100.
>>> # Using 'none' reduction type.
>>> mape = tf.keras.losses.MeanAbsolutePercentageError(
...     reduction=tf.keras.losses.Reduction.NONE)
>>> mape(y_true, y_pred).numpy()
array([25., 75.], dtype=float32)

Usage with the compile() API:

model.compile(optimizer='sgd',
              loss=tf.keras.losses.MeanAbsolutePercentageError())
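
Because the formula divides by y_true, targets at or near zero make the percentage error blow up (the epsilon mentioned above only guards against an exact division by zero). A small illustrative check; the specific numbers are arbitrary assumptions:

import tensorflow as tf

y_true = [[100.0], [0.001]]
y_pred = [[110.0], [0.101]]
# A 10% miss on the large target vs. a huge percentage on the tiny one.
print(tf.keras.losses.mean_absolute_percentage_error(y_true, y_pred).numpy())
# approximately [10., 10000.]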

MeanSquaredLogarithmicError class

tf.keras.losses.MeanSquaredLogarithmicError(
    reduction="auto", name="mean_squared_logarithmic_error"
)

Computes the mean squared logarithmic error between y_true and y_pred.
loss = square(log(y_true + 1.) - log(y_pred + 1.))
Standalone usage:

>>> y_true = [[0., 1.], [0., 0.]]
>>> y_pred = [[1., 1.], [1., 0.]]
>>> # Using 'auto'/'sum_over_batch_size' reduction type.
>>> msle = tf.keras.losses.MeanSquaredLogarithmicError()
>>> msle(y_true, y_pred).numpy()
0.240
>>> # Calling with 'sample_weight'.
>>> msle(y_true, y_pred, sample_weight=[0.7, 0.3]).numpy()
0.120
>>> # Using 'sum' reduction type.
>>> msle = tf.keras.losses.MeanSquaredLogarithmicError(
...     reduction=tf.keras.losses.Reduction.SUM)
>>> msle(y_true, y_pred).numpy()
0.480
>>> # Using 'none' reduction type.
>>> msle = tf.keras.losses.MeanSquaredLogarithmicError(
...     reduction=tf.keras.losses.Reduction.NONE)
>>> msle(y_true, y_pred).numpy()
array([0.240, 0.240], dtype=float32)

Usage with the compile() API:

model.compile(optimizer='sgd',
              loss=tf.keras.losses.MeanSquaredLogarithmicError())
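
The 0.240 value in the example above can be reproduced directly from the formula; a NumPy sketch of the reference computation:

import numpy as np

y_true = np.array([[0., 1.], [0., 0.]])
y_pred = np.array([[1., 1.], [1., 0.]])
# Per-sample MSLE over the last axis, then the 'sum_over_batch_size' mean.
per_sample = np.mean(np.square(np.log1p(y_true) - np.log1p(y_pred)), axis=-1)
print(per_sample)         # ~[0.2402, 0.2402]
print(per_sample.mean())  # ~0.2402, matching the 0.240 shown above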

CosineSimilarity class

tf.keras.losses.CosineSimilarity(
    axis=-1, reduction="auto", name="cosine_similarity"
)

Computes the cosine similarity between labels and predictions.
Note that it is a number between -1 and 1. When it is a negative number between -1 and 0, 0 indicates orthogonality and values closer to -1 indicate greater similarity. The values closer to 1 indicate greater dissimilarity. This makes it usable as a loss function in a setting where you try to maximize the proximity between predictions and targets. If either y_true or y_pred is a zero vector, cosine similarity will be 0 regardless of the proximity between predictions and targets.
loss = -sum(l2_norm(y_true) * l2_norm(y_pred))
Standalone usage:

>>> y_true = [[0., 1.], [1., 1.]]
>>> y_pred = [[1., 0.], [1., 1.]]
>>> # Using 'auto'/'sum_over_batch_size' reduction type.
>>> cosine_loss = tf.keras.losses.CosineSimilarity(axis=1)
>>> # l2_norm(y_true) = [[0., 1.], [1./1.414, 1./1.414]]
>>> # l2_norm(y_pred) = [[1., 0.], [1./1.414, 1./1.414]]
>>> # l2_norm(y_true) . l2_norm(y_pred) = [[0., 0.], [0.5, 0.5]]
>>> # loss = mean(sum(l2_norm(y_true) . l2_norm(y_pred), axis=1))
>>> #      = -((0. + 0.) + (0.5 + 0.5)) / 2
>>> cosine_loss(y_true, y_pred).numpy()
-0.5
>>> # Calling with 'sample_weight'.
>>> cosine_loss(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy()
-0.0999
>>> # Using 'sum' reduction type.
>>> cosine_loss = tf.keras.losses.CosineSimilarity(axis=1,
...     reduction=tf.keras.losses.Reduction.SUM)
>>> cosine_loss(y_true, y_pred).numpy()
-0.999
>>> # Using 'none' reduction type.
>>> cosine_loss = tf.keras.losses.CosineSimilarity(axis=1,
...     reduction=tf.keras.losses.Reduction.NONE)
>>> cosine_loss(y_true, y_pred).numpy()
array([-0., -0.999], dtype=float32)

Usage with the compile() API:

model.compile(optimizer='sgd', loss=tf.keras.losses.CosineSimilarity(axis=1))

Arguments

  • axis: The axis along which the cosine similarity is computed
    (the features axis). Defaults to -1.
  • reduction: Type of tf.keras.losses.Reduction to apply to loss.
    Default value is AUTO. AUTO indicates that the reduction option will
    be determined by the usage context. For almost all cases this defaults to
    SUM_OVER_BATCH_SIZE. When used with tf.distribute.Strategy, outside of
    built-in training loops such as tf.keras compile and fit, using
    AUTO or SUM_OVER_BATCH_SIZE will raise an error. Please see this
    custom training [tutorial]
    (https://www.tensorflow.org/tutorials/distribute/custom_training) for more
    details; a minimal sketch of that pattern follows this list.
  • name: Optional name for the instance.
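
When a loss with the default AUTO reduction is used inside a custom training loop under tf.distribute.Strategy, the error mentioned above is avoided by requesting per-example losses and scaling by the global batch size yourself. A minimal sketch of that pattern, following the linked tutorial; the global_batch_size value is an illustrative assumption:

import tensorflow as tf

global_batch_size = 64  # illustrative value

# Reduction.NONE yields one loss value per example.
loss_obj = tf.keras.losses.MeanSquaredError(
    reduction=tf.keras.losses.Reduction.NONE)

def compute_loss(y_true, y_pred):
    per_example_loss = loss_obj(y_true, y_pred)
    # Average over the *global* batch, not just the per-replica slice.
    return tf.nn.compute_average_loss(
        per_example_loss, global_batch_size=global_batch_size)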

mean_squared_error function

tf.keras.losses.mean_squared_error(y_true, y_pred)

Computes the mean squared error between labels and predictions.
After computing the squared distance between the inputs, the mean value over the last dimension is returned.
loss = mean(square(y_true - y_pred), axis=-1)
Standalone usage:

>>> y_true = np.random.randint(0, 2, size=(2, 3))
>>> y_pred = np.random.random(size=(2, 3))
>>> loss = tf.keras.losses.mean_squared_error(y_true, y_pred)
>>> assert loss.shape == (2,)
>>> assert np.array_equal(
...     loss.numpy(), np.mean(np.square(y_true - y_pred), axis=-1))

Arguments

  • y_true: Ground truth values. shape = [batch_size, d0, .. dN].
  • y_pred: The predicted values. shape = [batch_size, d0, .. dN].

Returns
Mean squared error values. shape = [batch_size, d0, .. dN-1].
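
As a quick illustration of that shape contract with a rank-3 input (the sizes here are arbitrary), only the last dimension is averaged away:

import numpy as np
import tensorflow as tf

y_true = np.random.random(size=(4, 5, 3))
y_pred = np.random.random(size=(4, 5, 3))
loss = tf.keras.losses.mean_squared_error(y_true, y_pred)
# [batch_size=4, d0=5, d1=3] in, [batch_size=4, d0=5] out.
assert loss.shape == (4, 5)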

mean_absolute_error function

tf.keras.losses.mean_absolute_error(y_true, y_pred)

Computes the mean absolute error between labels and predictions.
loss = mean(abs(y_true - y_pred), axis=-1)
Standalone usage:

>>> y_true = np.random.randint(0, 2, size=(2, 3))
>>> y_pred = np.random.random(size=(2, 3))
>>> loss = tf.keras.losses.mean_absolute_error(y_true, y_pred)
>>> assert loss.shape == (2,)
>>> assert np.array_equal(
...     loss.numpy(), np.mean(np.abs(y_true - y_pred), axis=-1))

Arguments

  • y_true: Ground truth values. shape = [batch_size, d0, .. dN].
  • y_pred: The predicted values. shape = [batch_size, d0, .. dN].

Returns
Mean absolute error values. shape = [batch_size, d0, .. dN-1].

mean_absolute_percentage_error function

tf.keras.losses.mean_absolute_percentage_error(y_true, y_pred)

Computes the mean absolute percentage error between y_true and y_pred.
loss = 100 * mean(abs((y_true - y_pred) / y_true), axis=-1)
Standalone usage:

>>> y_true = np.random.random(size=(2, 3))
>>> y_true = np.maximum(y_true, 1e-7)  # Prevent division by zero
>>> y_pred = np.random.random(size=(2, 3))
>>> loss = tf.keras.losses.mean_absolute_percentage_error(y_true, y_pred)
>>> assert loss.shape == (2,)
>>> assert np.array_equal(
...     loss.numpy(),
...     100. * np.mean(np.abs((y_true - y_pred) / y_true), axis=-1))

Arguments

  • y_true: Ground truth values. shape = [batch_size, d0, .. dN].
  • y_pred: The predicted values. shape = [batch_size, d0, .. dN].

Returns
Mean absolute percentage error values. shape = [batch_size, d0, .. dN-1].

mean_squared_logarithmic_error function

tf.keras.losses.mean_squared_logarithmic_error(y_true, y_pred)

Computes the mean squared logarithmic error between y_true and y_pred.
loss = mean(square(log(y_true + 1) - log(y_pred + 1)), axis=-1)
Standalone usage:

>>> y_true = np.random.randint(0, 2, size=(2, 3))
>>> y_pred = np.random.random(size=(2, 3))
>>> loss = tf.keras.losses.mean_squared_logarithmic_error(y_true, y_pred)
>>> assert loss.shape == (2,)
>>> y_true = np.maximum(y_true, 1e-7)
>>> y_pred = np.maximum(y_pred, 1e-7)
>>> assert np.allclose(
...     loss.numpy(),
...     np.mean(
...         np.square(np.log(y_true + 1.) - np.log(y_pred + 1.)), axis=-1))

Arguments

  • y_true: Ground truth values. shape = [batch_size, d0, .. dN].
  • y_pred: The predicted values. shape = [batch_size, d0, .. dN].

Returns
Mean squared logarithmic error values. shape = [batch_size, d0, .. dN-1].

cosine_similarity function

tf.keras.losses.cosine_similarity(y_true, y_pred, axis=-1)

Computes the cosine similarity between labels and predictions.
Note that it is a number between -1 and 1. When it is a negative number between -1 and 0, 0 indicates orthogonality and values closer to -1 indicate greater similarity. The values closer to 1 indicate greater dissimilarity. This makes it usable as a loss function in a setting where you try to maximize the proximity between predictions and targets. If either y_true or y_pred is a zero vector, cosine similarity will be 0 regardless of the proximity between predictions and targets.
loss = -sum(l2_norm(y_true) * l2_norm(y_pred))
Standalone usage:

>>> y_true = [[0., 1.], [1., 1.], [1., 1.]]
>>> y_pred = [[1., 0.], [1., 1.], [-1., -1.]]
>>> loss = tf.keras.losses.cosine_similarity(y_true, y_pred, axis=1)
>>> loss.numpy()
array([-0., -0.999, 0.999], dtype=float32)

Arguments

  • y_true: Tensor of true targets.
  • y_pred: Tensor of predicted targets.
  • axis: Axis along which to determine similarity.

Returns
Cosine similarity tensor.
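
The formula loss = -sum(l2_norm(y_true) * l2_norm(y_pred)) can be checked against a plain NumPy computation; a sketch using the same inputs as the standalone example above:

import numpy as np

y_true = np.array([[0., 1.], [1., 1.], [1., 1.]])
y_pred = np.array([[1., 0.], [1., 1.], [-1., -1.]])

def l2_normalize(x, axis):
    # Scale each row to unit L2 norm.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

ref = -np.sum(l2_normalize(y_true, 1) * l2_normalize(y_pred, 1), axis=1)
print(ref)  # ~[-0., -1., 1.], matching the TF output up to its epsilon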

Huber class

tf.keras.losses.Huber(delta=1.0, reduction="auto", name="huber_loss")

Computes the Huber loss between y_true and y_pred.
For each value x in error = y_true - y_pred:

loss = 0.5 * x^2                  if |x| <= d
loss = 0.5 * d^2 + d * (|x| - d)  if |x| > d

where d is delta. See: https://en.wikipedia.org/wiki/Huber_loss
Standalone usage:

>>> y_true = [[0, 1], [0, 0]]
>>> y_pred = [[0.6, 0.4], [0.4, 0.6]]
>>> # Using 'auto'/'sum_over_batch_size' reduction type.
>>> h = tf.keras.losses.Huber()
>>> h(y_true, y_pred).numpy()
0.155
>>> # Calling with 'sample_weight'.
>>> h(y_true, y_pred, sample_weight=[1, 0]).numpy()
0.09
>>> # Using 'sum' reduction type.
>>> h = tf.keras.losses.Huber(
...     reduction=tf.keras.losses.Reduction.SUM)
>>> h(y_true, y_pred).numpy()
0.31
>>> # Using 'none' reduction type.
>>> h = tf.keras.losses.Huber(
...     reduction=tf.keras.losses.Reduction.NONE)
>>> h(y_true, y_pred).numpy()
array([0.18, 0.13], dtype=float32)

Usage with the compile() API:

model.compile(optimizer='sgd', loss=tf.keras.losses.Huber())

huber function

tf.keras.losses.huber(y_true, y_pred, delta=1.0)

Computes Huber loss value.
For each value x in error = y_true - y_pred:

loss = 0.5 * x^2                  if |x| <= d
loss = d * |x| - 0.5 * d^2        if |x| > d

where d is delta. See: https://en.wikipedia.org/wiki/Huber_loss
Arguments

  • y_true: tensor of true targets.
  • y_pred: tensor of predicted targets.
  • delta: A float, the point where the Huber loss function changes from a
    quadratic to linear.

Returns
Tensor with one scalar loss entry per sample.
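
A small NumPy reference implementation of the piecewise formula above, as a sanity check (delta defaults to 1.0 here, matching the Keras default):

import numpy as np

def huber_reference(y_true, y_pred, delta=1.0):
    # Quadratic for small errors, linear beyond delta.
    x = np.abs(np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float))
    quadratic = 0.5 * np.square(x)
    linear = delta * x - 0.5 * delta ** 2
    return np.mean(np.where(x <= delta, quadratic, linear), axis=-1)

y_true = [[0, 1], [0, 0]]
y_pred = [[0.6, 0.4], [0.4, 0.6]]
print(huber_reference(y_true, y_pred))  # ~[0.18, 0.13], as in the 'none' example above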

LogCosh class

tf.keras.losses.LogCosh(reduction="auto", name="log_cosh")

Computes the logarithm of the hyperbolic cosine of the prediction error.
logcosh = log((exp(x) + exp(-x))/2), where x is the error y_pred - y_true.
Standalone usage:

>>> y_true = [[0., 1.], [0., 0.]]
>>> y_pred = [[1., 1.], [0., 0.]]
>>> # Using 'auto'/'sum_over_batch_size' reduction type.
>>> l = tf.keras.losses.LogCosh()
>>> l(y_true, y_pred).numpy()
0.108
>>> # Calling with 'sample_weight'.
>>> l(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy()
0.087
>>> # Using 'sum' reduction type.
>>> l = tf.keras.losses.LogCosh(
...     reduction=tf.keras.losses.Reduction.SUM)
>>> l(y_true, y_pred).numpy()
0.217
>>> # Using 'none' reduction type.
>>> l = tf.keras.losses.LogCosh(
...     reduction=tf.keras.losses.Reduction.NONE)
>>> l(y_true, y_pred).numpy()
array([0.217, 0.], dtype=float32)

Usage with the compile() API:

model.compile(optimizer='sgd', loss=tf.keras.losses.LogCosh())

log_cosh function

tf.keras.losses.log_cosh(y_true, y_pred)

Logarithm of the hyperbolic cosine of the prediction error.
log(cosh(x)) is approximately equal to (x ** 2) / 2 for small x and to abs(x) - log(2) for large x. This means that 'logcosh' works mostly like the mean squared error, but will not be so strongly affected by the occasional wildly incorrect prediction.
Standalone usage:

>>> y_true = np.random.random(size=(2, 3))
>>> y_pred = np.random.random(size=(2, 3))
>>> loss = tf.keras.losses.logcosh(y_true, y_pred)
>>> assert loss.shape == (2,)
>>> x = y_pred - y_true
>>> assert np.allclose(
...     loss.numpy(),
...     np.mean(x + np.log(np.exp(-2. * x) + 1.) - tf.math.log(2.), axis=-1),
...     atol=1e-5)

Arguments

  • y_true: Ground truth values. shape = [batch_size, d0, .. dN].
  • y_pred: The predicted values. shape = [batch_size, d0, .. dN].

Returns
Logcosh error values. shape = [batch_size, d0, .. dN-1].
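
The two approximations mentioned above (x ** 2 / 2 for small x, abs(x) - log(2) for large x) are easy to check numerically; a brief sketch:

import numpy as np

def logcosh(x):
    # Numerically stable log(cosh(x)); cosh is even, so work with |x|.
    a = np.abs(x)
    return a + np.log1p(np.exp(-2.0 * a)) - np.log(2.0)

print(logcosh(0.01), 0.01 ** 2 / 2)            # both ~5.0e-05
print(logcosh(20.0), abs(20.0) - np.log(2.0))  # both ~19.3069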
