TensorFlow 2.x: Complete Deep Learning Framework
Master TensorFlow from zero to hero: Keras APIs, custom training, tf.data, transfer learning, and production deployment with TF Serving, Lite, and JS.
Keras API
Sequential & Functional
GradientTape
Custom training
tf.data
Pipelines
Deployment
Serving, Lite, JS
Why TensorFlow? The Complete Ecosystem
TensorFlow is Google's end-to-end open-source platform for machine learning. Unlike research-focused frameworks, TensorFlow is built for production:
- Keras as high-level API (fast prototyping)
- TF Serving: deploy models with REST/gRPC
- TensorFlow Lite: on-device inference
- TensorFlow.js: ML in browser
- TensorBoard: visualization
- TFX: production pipelines
TensorFlow 2.x Milestones
- 2019 TF 2.0: eager execution by default
- 2020 Keras as core API
- 2021+ TF Serving, Lite, JS maturity
- Battle-tested in Google's own production systems
TF vs PyTorch: Choose TensorFlow when...
- ✅ You need production deployment (serving, mobile, web)
- ✅ You want an all-in-one ecosystem (TFX, TF Data Validation)
- ✅ Scalable distributed training is a priority
- ✅ You prefer high-level Keras API for rapid prototyping
Keras Sequential API: Your First Neural Network
# Install TensorFlow
pip install tensorflow
import tensorflow as tf
# Load dataset
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# Build model
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])
# Compile
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# Train
model.fit(x_train, y_train, epochs=5)
# Evaluate
model.evaluate(x_test, y_test)
Use model.summary() to inspect layer shapes and parameter counts.
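Once trained, inference is a single predict() call followed by an argmax over the class probabilities. A minimal sketch (the untrained stand-in model and random batch below are placeholders for the trained MNIST model and x_test):

```python
import numpy as np
import tensorflow as tf

# Stand-in model with the same input/output shapes as the MNIST example
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax')
])

# Fake batch of five 28x28 "images" in place of x_test
x_batch = np.random.rand(5, 28, 28).astype('float32')

# predict() returns one probability vector per sample; argmax picks the class
probs = model.predict(x_batch)
classes = np.argmax(probs, axis=1)
```

Because the last layer is a softmax, each row of `probs` sums to 1, so `classes` holds the most likely digit per sample.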
Functional API: Multi-Input, Multi-Output, Shared Layers
The Sequential API is linear. For complex graphs (ResNet, siamese networks), use the Functional API.
Multi-Input Model
input1 = tf.keras.Input(shape=(64,))
input2 = tf.keras.Input(shape=(128,))
x1 = tf.keras.layers.Dense(32, activation='relu')(input1)
x2 = tf.keras.layers.Dense(32, activation='relu')(input2)
merged = tf.keras.layers.concatenate([x1, x2])
output = tf.keras.layers.Dense(1, activation='sigmoid')(merged)
model = tf.keras.Model(inputs=[input1, input2], outputs=output)
Shared Layers (Siamese)
shared_embed = tf.keras.layers.Dense(64, activation='relu')
input_a = tf.keras.Input(shape=(100,))
input_b = tf.keras.Input(shape=(100,))
out_a = shared_embed(input_a)
out_b = shared_embed(input_b)
model = tf.keras.Model(inputs=[input_a, input_b],
                       outputs=[out_a, out_b])
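To turn the shared towers into a trainable similarity model, one common pattern (a sketch, not part of the original snippet) compares the two embeddings with an element-wise absolute difference feeding a sigmoid head:

```python
import tensorflow as tf

# Shared tower: both inputs go through the same weights
shared_embed = tf.keras.layers.Dense(64, activation='relu')
input_a = tf.keras.Input(shape=(100,))
input_b = tf.keras.Input(shape=(100,))
out_a = shared_embed(input_a)
out_b = shared_embed(input_b)

# Element-wise |a - b|, then a sigmoid similarity score in [0, 1]
diff = tf.keras.layers.Lambda(lambda t: tf.abs(t[0] - t[1]))([out_a, out_b])
score = tf.keras.layers.Dense(1, activation='sigmoid')(diff)

siamese = tf.keras.Model(inputs=[input_a, input_b], outputs=score)
siamese.compile(optimizer='adam', loss='binary_crossentropy')
```

Trained on pairs labeled same/different, the model learns an embedding where similar inputs produce scores near 1.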
Custom Training Loops: Full Control with GradientTape
When you need custom loss, dynamic operations, or research flexibility — tf.GradientTape gives you fine-grained control.
# Model, optimizer, loss
model = tf.keras.Sequential([...])
optimizer = tf.keras.optimizers.Adam()
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
# Custom training step
@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        predictions = model(x, training=True)
        loss = loss_fn(y, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    return loss
# Loop
for epoch in range(epochs):
    for batch_x, batch_y in dataset:
        loss = train_step(batch_x, batch_y)
Use @tf.function to compile the training step into a graph; this speeds up execution significantly.
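Metrics in a custom loop are tracked with stateful tf.keras.metrics objects: update_state() accumulates per batch, result() reads the aggregate. A minimal end-to-end sketch on random data (the tiny model and sizes are placeholders):

```python
import tensorflow as tf

# Tiny model and random data, just to exercise the loop
model = tf.keras.Sequential([tf.keras.layers.Dense(3)])
optimizer = tf.keras.optimizers.Adam()
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
train_acc = tf.keras.metrics.SparseCategoricalAccuracy()

x = tf.random.normal((64, 8))
y = tf.random.uniform((64,), maxval=3, dtype=tf.int32)
dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(16)

@tf.function
def train_step(bx, by):
    with tf.GradientTape() as tape:
        logits = model(bx, training=True)
        loss = loss_fn(by, logits)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    train_acc.update_state(by, logits)  # accumulate batch accuracy
    return loss

for bx, by in dataset:
    loss = train_step(bx, by)

epoch_acc = float(train_acc.result())  # aggregate over all batches
train_acc.reset_state()                # clear between epochs
```

Call reset_state() at the end of each epoch so the metric does not leak across epochs.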
tf.data: Build Efficient Input Pipelines
Don't load the whole dataset into memory: tf.data loads, preprocesses, and augments data on the fly with parallelization.
dataset = tf.data.Dataset.from_tensor_slices((X, y))
dataset = dataset.shuffle(1000).batch(32).prefetch(1)
dataset = tf.keras.utils.image_dataset_from_directory(
    'data/',
    validation_split=0.2,
    subset='training',
    seed=123,  # required for a reproducible train/validation split
    image_size=(224, 224),
    batch_size=32
)
Key transformations:
- .map(preprocess_fn, num_parallel_calls=tf.data.AUTOTUNE) – parallel preprocessing
- .cache() – cache after first epoch
- .prefetch(1) – overlap preprocessing and training
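Chained together, a typical pipeline looks like this (preprocess_fn here is a stand-in normalization on fake data):

```python
import tensorflow as tf

# Fake dataset: 100 "images" with integer labels
X = tf.random.uniform((100, 28, 28), maxval=255.0)
y = tf.random.uniform((100,), maxval=10, dtype=tf.int32)

def preprocess_fn(image, label):
    # Stand-in preprocessing: scale pixels to [0, 1]
    return image / 255.0, label

dataset = (
    tf.data.Dataset.from_tensor_slices((X, y))
    .map(preprocess_fn, num_parallel_calls=tf.data.AUTOTUNE)  # parallel CPU work
    .cache()       # keep preprocessed elements after the first pass
    .shuffle(100)  # shuffle before batching
    .batch(32)
    .prefetch(tf.data.AUTOTUNE)  # overlap input pipeline with training
)
```

Note the order: cache() before shuffle(), so cached data is reshuffled every epoch, and shuffle() before batch(), so shuffling happens at the element level.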
Callbacks: ModelCheckpoint, EarlyStopping, TensorBoard
Intelligent training with automatic saving, stopping, and visualization.
callbacks = [
    tf.keras.callbacks.ModelCheckpoint(
        'best_model.h5', save_best_only=True, monitor='val_accuracy'),
    tf.keras.callbacks.EarlyStopping(
        patience=5, restore_best_weights=True, monitor='val_loss'),
    tf.keras.callbacks.ReduceLROnPlateau(
        factor=0.5, patience=3, min_lr=1e-7),
    tf.keras.callbacks.TensorBoard(log_dir='./logs')
]
model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          epochs=100,
          callbacks=callbacks)
# Launch TensorBoard: tensorboard --logdir ./logs
Transfer Learning with TensorFlow Hub
Leverage state-of-the-art pre-trained models (ResNet, BERT, EfficientNet) in minutes.
import tensorflow_hub as hub
model = tf.keras.Sequential([
    hub.KerasLayer("https://tfhub.dev/google/efficientnet/b0/classification/1",
                   input_shape=(224, 224, 3)),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(...)
preprocess = hub.KerasLayer(
    "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3")
encoder = hub.KerasLayer(
    "https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/4")
text_input = tf.keras.Input(shape=(), dtype=tf.string)
embeddings = encoder(preprocess(text_input))['pooled_output']
Set trainable=False on the Hub layer to freeze the backbone and fine-tune only the head.
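A minimal freezing sketch (using tf.keras.applications.MobileNetV2 with weights=None as a stand-in base so nothing is downloaded; with a Hub layer you would pass trainable=False to hub.KerasLayer instead):

```python
import tensorflow as tf

# Stand-in backbone; weights=None means random init, no download
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False,
    weights=None, pooling='avg')
base.trainable = False  # freeze every layer in the backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(10, activation='softmax')  # trainable head
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
```

With the backbone frozen, only the head's weights appear in model.trainable_weights, so training updates just the classifier on top of the fixed features.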
Save, Load, and Export Models
SavedModel format
# Save
model.save('my_model/')
# Load
loaded_model = tf.keras.models.load_model('my_model/')
Export to TF Lite
converter = tf.lite.TFLiteConverter.from_saved_model('my_model/')
tflite_model = converter.convert()
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
Production Deployment: TensorFlow Serving, Lite, JS
TensorFlow's killer feature: deploy anywhere.
TF Serving
REST/gRPC server for models.
docker run -p 8501:8501 \
  --mount type=bind,source=/models/my_model,target=/models/my_model \
  -e MODEL_NAME=my_model \
  tensorflow/serving
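TF Serving's REST API expects a JSON body with an "instances" list POSTed to /v1/models/&lt;name&gt;:predict. A minimal client sketch using only the standard library (host and model name "my_model" are placeholders matching the docker example above):

```python
import json
import urllib.request

def make_predict_request(host, model_name, instances):
    """Build a urllib Request for TF Serving's REST predict endpoint."""
    url = f'http://{host}/v1/models/{model_name}:predict'
    body = json.dumps({'instances': instances}).encode('utf-8')
    return urllib.request.Request(
        url, data=body, headers={'Content-Type': 'application/json'})

# Example: two 4-feature rows for a hypothetical model named "my_model"
req = make_predict_request('localhost:8501', 'my_model',
                           [[1.0, 2.0, 3.0, 4.0], [5.0, 6.0, 7.0, 8.0]])
# urllib.request.urlopen(req) would return a JSON body with a "predictions"
# list from a live server; no request is sent in this sketch.
```

Each element of "instances" must match one input tensor of the served model; the server replies with one prediction per instance.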
TF Lite
Android, iOS, Edge TPU.
interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()
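A self-contained round-trip sketch: convert a tiny in-memory model, then run it through the interpreter (tf.lite.Interpreter also accepts model_content= instead of a file path, which avoids writing model.tflite to disk):

```python
import numpy as np
import tensorflow as tf

# Tiny model to convert; in practice you would convert your trained model
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(2)
])
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Load the flatbuffer directly from memory
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed one float32 sample and read back the prediction
x = np.random.rand(1, 4).astype(np.float32)
interpreter.set_tensor(inp['index'], x)
interpreter.invoke()
result = interpreter.get_tensor(out['index'])
```

The set_tensor / invoke / get_tensor sequence is the core TF Lite inference loop on device as well.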
TF.js
Run in browser / Node.js.
const model = await tf.loadLayersModel('model.json');
Visualize with TensorBoard
Track metrics, model graphs, histograms, and embeddings.
# Inside custom training loop
writer = tf.summary.create_file_writer('logs/fit/')
with writer.as_default():
    tf.summary.scalar('loss', loss, step=epoch)
    tf.summary.histogram('weights', model.weights[0], step=epoch)
TensorFlow Ecosystem: One Platform, All Stages
Research → Production → Edge → Web. TensorFlow provides a seamless path from experimentation to serving millions of users.