# How to Reshape Input Data for Long Short-Term Memory Networks in Keras

It can be difficult to understand how to prepare your sequence data for input to an LSTM model.

Frequently there is confusion around how to define the input layer for the LSTM model.

There is also confusion about how to convert your sequence data, which may be a 1D or 2D matrix of numbers, to the required 3D format of the LSTM input layer.

In this tutorial, you will discover how to define the input layer to LSTM models and how to reshape your loaded input data for LSTM models.

After completing this tutorial, you will know:

- How to define an LSTM input layer.
- How to reshape one-dimensional sequence data for an LSTM model and define the input layer.
- How to reshape multiple parallel series data for an LSTM model and define the input layer.

**Kick-start your project** with my new book Long Short-Term Memory Networks With Python, including step-by-step tutorials and the Python source code files for all examples.

Let's get started.

## Tutorial Overview

This tutorial is divided into 4 parts; they are:

- LSTM Input Layer
- Example of LSTM with Single Input Sample
- Example of LSTM with Multiple Input Features
- Tips for LSTM Input

## LSTM Input Layer

The LSTM input layer is specified by the “input_shape” argument on the first hidden layer of the network.

This can make things confusing for beginners.

For example, below is a network with one hidden LSTM layer and one Dense output layer.

```python
from keras.models import Sequential
from keras.layers import LSTM, Dense

model = Sequential()
model.add(LSTM(32))
model.add(Dense(1))
```

In this example, the LSTM() layer must specify the shape of the input.

The input to every LSTM layer must be three-dimensional.

The three dimensions of this input are:

- **Samples**. One sequence is one sample. A batch is comprised of one or more samples.
- **Time Steps**. One time step is one point of observation in the sample.
- **Features**. One feature is one observation at a time step.
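
To make the meaning of each dimension concrete, here is a small sketch (the values are arbitrary placeholders) that indexes a 3D NumPy array along each of the three axes:

```python
from numpy import array

# 2 samples, 3 time steps, 1 feature (values are arbitrary placeholders)
data = array([[[0.1], [0.2], [0.3]],
              [[0.4], [0.5], [0.6]]])

print(data.shape)     # (2, 3, 1): (samples, time steps, features)
print(data[0])        # the first sample: a (3, 1) sequence
print(data[0, 1])     # the second time step of the first sample
print(data[0, 1, 0])  # the single feature value at that time step
```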

This means that the input layer expects a 3D array of data when fitting the model and when making predictions, even if specific dimensions of the array contain a single value, e.g. one sample or one feature.

When defining the input layer of your LSTM network, the network assumes you have 1 or more samples and requires that you specify the number of time steps and the number of features. You can do this by specifying a tuple to the “input_shape” argument.

For example, the model below defines an input layer that expects 1 or more samples, 50 time steps, and 2 features.

```python
model = Sequential()
model.add(LSTM(32, input_shape=(50, 2)))
model.add(Dense(1))
```

Now that we know how to define an LSTM input layer and the expectations of 3D inputs, let's look at some examples of how we can prepare our data for the LSTM.

## Example of LSTM With Single Input Sample

Consider the case where you have one sequence of multiple time steps and one feature.

For example, this could be a sequence of 10 values:

```
0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0
```

We can define this sequence of numbers as a NumPy array.

```python
from numpy import array

data = array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
```

We can then use the reshape() function on the NumPy array to reshape this one-dimensional array into a three-dimensional array with 1 sample, 10 time steps, and 1 feature at each time step.

The reshape() function, when called on an array, takes one argument which is a tuple defining the new shape of the array. We cannot pass in just any tuple of numbers; the reshape must evenly reorganize the data in the array.
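
As a self-contained aside, the sketch below shows this constraint in action: a tuple whose dimensions multiply out to the number of elements succeeds, while an incompatible one raises a ValueError:

```python
from numpy import array

data = array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])

# 1 * 10 * 1 == 10 elements, matching the array, so this succeeds
ok = data.reshape((1, 10, 1))
print(ok.shape)  # (1, 10, 1)

# 1 * 5 * 1 == 5 elements, not 10, so NumPy raises a ValueError
try:
    data.reshape((1, 5, 1))
except ValueError as err:
    print('cannot reshape:', err)
```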

```python
data = data.reshape((1, 10, 1))
```

Once reshaped, we can print the new shape of the array.

```python
print(data.shape)
```

Putting all of this together, the complete example is listed below.

```python
from numpy import array

data = array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
data = data.reshape((1, 10, 1))
print(data.shape)
```

Running the example prints the new 3D shape of the single sample.

```
(1, 10, 1)
```

This data is now ready to be used as input (X) to the LSTM with an input_shape of (10, 1).

```python
model = Sequential()
model.add(LSTM(32, input_shape=(10, 1)))
model.add(Dense(1))
```

## Example of LSTM With Multiple Input Features

Consider the case where you have multiple parallel series as input for your model.

For example, this could be two parallel series of 10 values:

```
series 1: 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0
series 2: 1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1
```

We can define these data as a matrix of 2 columns with 10 rows:

```python
from numpy import array

data = array([
    [0.1, 1.0],
    [0.2, 0.9],
    [0.3, 0.8],
    [0.4, 0.7],
    [0.5, 0.6],
    [0.6, 0.5],
    [0.7, 0.4],
    [0.8, 0.3],
    [0.9, 0.2],
    [1.0, 0.1]])
```

This data can be framed as 1 sample with 10 time steps and 2 features .

It can be reshaped as a 3D array as follows:

```python
data = data.reshape(1, 10, 2)
```

Putting all of this together, the complete example is listed below.

```python
from numpy import array

data = array([
    [0.1, 1.0],
    [0.2, 0.9],
    [0.3, 0.8],
    [0.4, 0.7],
    [0.5, 0.6],
    [0.6, 0.5],
    [0.7, 0.4],
    [0.8, 0.3],
    [0.9, 0.2],
    [1.0, 0.1]])
data = data.reshape(1, 10, 2)
print(data.shape)
```

Running the example prints the new 3D shape of the single sample.

```
(1, 10, 2)
```

This data is now ready to be used as input (X) to the LSTM with an input_shape of (10, 2).

```python
model = Sequential()
model.add(LSTM(32, input_shape=(10, 2)))
model.add(Dense(1))
```

## Longer Worked Example

For a complete end-to-end worked example of preparing data, see this post:

## Tips for LSTM Input

This section lists some tips to help you when preparing your input data for LSTMs.

- The LSTM input layer must be 3D.
- The meanings of the 3 input dimensions are: samples, time steps, and features.
- The LSTM input layer is defined by the input_shape argument on the first hidden layer.
- The input_shape argument takes a tuple of two values that define the number of time steps and features.
- The number of samples is assumed to be 1 or more.
- The reshape() function on NumPy arrays can be used to reshape your 1D or 2D data to be 3D.
- The reshape() function takes a tuple as an argument that defines the new shape.
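
The tips above can be pulled together in a minimal sketch (the values are made up for illustration) that takes a 2D array of several univariate sequences, with rows as samples and columns as time steps, and reshapes it into the 3D [samples, time steps, features] form an LSTM expects:

```python
from numpy import array

# 3 samples (rows), each a univariate sequence of 4 time steps (columns)
data = array([[0.1, 0.2, 0.3, 0.4],
              [0.5, 0.6, 0.7, 0.8],
              [0.9, 1.0, 1.1, 1.2]])
print(data.shape)  # (3, 4)

# Add the trailing features dimension: [samples, time steps, features]
data = data.reshape((data.shape[0], data.shape[1], 1))
print(data.shape)  # (3, 4, 1)
```

The matching Keras input layer for this data would then use input_shape=(4, 1), since the samples dimension is not specified.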

## Further Reading

This section provides more resources on the subject if you are looking to go deeper.

## Summary

In this tutorial, you discovered how to define the input layer for LSTMs and how to reshape your sequence data for input to LSTMs .

Specifically, you learned:

- How to define an LSTM input layer.
- How to reshape one-dimensional sequence data for an LSTM model and define the input layer.
- How to reshape multiple parallel series data for an LSTM model and define the input layer.

Do you have any questions?

Ask your questions in the comments below and I will do my best to answer .

## Develop LSTMs for Sequence Prediction Today!

#### Develop Your Own LSTM models in Minutes

…with just a few lines of python code

Discover how in my new Ebook:

Long Short-Term Memory Networks with Python

It provides **self-study tutorials** on topics like:

CNN LSTMs, Encoder-Decoder LSTMs, generative models, data preparation, making predictions and much more…

#### Finally Bring LSTM Recurrent Neural Networks to Your Sequence Prediction Projects

Skip the Academics. Just Results.

See What's Inside