Problem
You have a trained Keras model and want to save it and load it elsewhere.

Solution
Save the model as HDF5:

# Load libraries
import numpy as np
from keras.datasets import imdb
from keras.preprocessing.text import Tokenizer
from keras import models
from keras import layers
from keras.models import load_model

# Set random seed
np.random.seed(0)

# Set the number of features we want
number_of_features = 1000

# Load data and target vector from movie review data
(train_data, train_target), (test_data, test_target) = imdb.load_data(
    num_words=number_of_features)
 
# Convert movie review data to a one-hot encoded feature matrix
tokenizer = Tokenizer(num_words=number_of_features)
train_features = tokenizer.sequences_to_matrix(train_data, mode="binary")
test_features = tokenizer.sequences_to_matrix(test_data, mode="binary")

# Start neural network
network = models.Sequential()

# Add fully connected layer with a ReLU activation function
network.add(layers.Dense(units=16,
                         activation="relu",
                         input_shape=(number_of_features,)))

# Add fully connected layer with a sigmoid activation function
network.add(layers.Dense(units=1, activation="sigmoid"))

# Compile neural network
network.compile(loss="binary_crossentropy", # Cross-entropy
                optimizer="rmsprop", # Root Mean Square Propagation
                metrics=["accuracy"]) # Accuracy performance metric
 
# Train neural network
history = network.fit(train_features, # Features
                      train_target, # Target vector
                      epochs=3, # Number of epochs
                      verbose=0, # No output
                      batch_size=100, # Number of observations per batch
                      validation_data=(test_features, test_target)) # Test data

# Save neural network
network.save("model.h5")

Using TensorFlow backend.
We can then load the model either in another application or for additional training:

# Load neural network
network = load_model("model.h5")
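Once loaded, the model can be used just like the original. A minimal sketch of scoring new observations with the loaded network, assuming the test_features matrix from the Solution is still in memory:

# Predict class probabilities for the first five test observations
predicted_probabilities = network.predict(test_features[:5])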

Discussion
Unlike scikit-learn, Keras does not recommend saving models using pickle.
Instead, models are saved as an HDF5 file. The HDF5 file contains everything you
need not only to load the model and make predictions (i.e., the architecture and
trained parameters), but also to restart training (i.e., the loss and optimizer
settings and the current state).
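
Because the optimizer settings and state are stored in the file, a loaded model can pick up training where it left off. A minimal sketch of resuming training, assuming the training and test data from the Solution are still available:

# Continue training the loaded model for three more epochs,
# starting from the saved weights and optimizer state
history = network.fit(train_features,
                      train_target,
                      epochs=3,
                      verbose=0,
                      batch_size=100,
                      validation_data=(test_features, test_target))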
