TensorFlow & Keras
20 Essential Q/A
DL Interview Prep
TensorFlow & Keras: 20 Interview Questions
Master TensorFlow 2.x and Keras: eager execution, tf.data, callbacks, functional API, custom training, distributed strategies, model deployment. Concise, interview-ready answers with code.
TensorFlow
Keras
tf.data
Eager Execution
Callbacks
Functional API
Distributed
1
What is the relationship between TensorFlow and Keras?
Easy
Answer: Keras is the high-level API of TensorFlow. Since TensorFlow 2.0, tf.keras has been the official, fully integrated implementation. It provides user-friendly model building, training loops, and deployment, while TensorFlow handles lower-level operations, distributed execution, and serving.
model = tf.keras.Sequential([...]) # High-level Keras API
2
What is eager execution in TensorFlow? Why was it introduced?
Medium
Answer: Eager execution runs operations immediately (imperatively), returning concrete values instead of building a static graph. It makes debugging intuitive, enables native Python control flow, and is the default in TF 2.0+. No more sess.run().
Debug with print statements and breakpoints.
Slightly slower than graph mode (mitigated by @tf.function).
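A minimal sketch of what "imperative" means in practice: the matmul below executes the moment it is called and returns a concrete value you can print or inspect.

```python
import tensorflow as tf

# Ops run immediately and return concrete EagerTensors: no session, no graph build
a = tf.constant([[1.0, 2.0]])
b = tf.constant([[3.0], [4.0]])
c = tf.matmul(a, b)          # executes right away
print(c.numpy())             # [[11.]]
```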
3
What does @tf.function do? How does AutoGraph work?
Hard
Answer: @tf.function converts Python code into a TensorFlow graph for performance. AutoGraph transforms Python control flow (if, while) into graph-compatible ops. The first call traces and builds the graph; subsequent calls with the same input signature are fast. Use it for performance-critical sections.
@tf.function
def train_step(x):
    with tf.GradientTape() as tape:
        loss = compute_loss(x)
    return tape.gradient(loss, model.trainable_variables)
4
Compare Sequential API, Functional API, and Model Subclassing in Keras.
Hard
Answer:
- Sequential: simple linear stack, no branching; easiest to use.
- Functional: non-linear topologies (residual connections, multi-input/multi-output), shared layers, graph built at definition time.
- Subclassing: full flexibility; define the forward pass imperatively; harder to serialize; best for research.
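A small sketch of the Functional API's advantage: the skip connection below is a topology the Sequential API cannot express.

```python
import tensorflow as tf

# A residual (skip) connection: output of a layer added back to its input
inputs = tf.keras.Input(shape=(32,))
x = tf.keras.layers.Dense(32, activation='relu')(inputs)
x = tf.keras.layers.Add()([x, inputs])      # branch and merge: not possible in Sequential
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.summary()
```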
5
How do you build efficient data pipelines with tf.data?
Medium
Answer: Build tf.data.Dataset pipelines by chaining .from_tensor_slices, .map (with parallel calls), .cache, .shuffle, .batch, and .prefetch. Prefetch overlaps preprocessing with model execution. Use .interleave for parallel file reads.
dataset = tf.data.Dataset.from_tensor_slices((x,y))
dataset = dataset.shuffle(1000).batch(32).prefetch(tf.data.AUTOTUNE)
6
Name important Keras callbacks and their uses.
Medium
Answer:
- ModelCheckpoint: save best model during training
- EarlyStopping: stop when metric plateaus
- ReduceLROnPlateau: reduce LR when stuck
- TensorBoard: log metrics, histograms, graphs
- CSVLogger: save epoch results
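A minimal sketch of wiring callbacks into training, on synthetic data (the model, data shapes, and hyperparameters here are illustrative, not from the original text):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Input(shape=(4,)),
                             tf.keras.layers.Dense(1)])
model.compile(optimizer='adam', loss='mse')

callbacks = [
    tf.keras.callbacks.EarlyStopping(monitor='loss', patience=2),
    tf.keras.callbacks.ReduceLROnPlateau(monitor='loss', factor=0.5, patience=1),
]
x, y = np.random.rand(16, 4), np.random.rand(16, 1)
history = model.fit(x, y, epochs=3, verbose=0, callbacks=callbacks)
print(history.history['loss'])
```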
7
How do you write a custom training loop in TensorFlow 2.x?
Hard
Answer: Use tf.GradientTape to record the forward pass, compute the loss, then apply gradients via the optimizer. Loop over epochs and batches. Optionally decorate the step with @tf.function. This gives more control than model.fit.
for epoch in range(epochs):
    for x_batch, y_batch in dataset:
        with tf.GradientTape() as tape:
            preds = model(x_batch)
            loss = loss_fn(y_batch, preds)
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
8
What formats are available to save Keras models?
Medium
Answer:
- SavedModel (TF2 default): a directory; platform-agnostic, supports custom objects.
- HDF5 (.h5): a single file; the legacy Keras format.
- Weights only (.weights.h5): the architecture must be rebuilt separately; use model.save_weights() / model.load_weights().
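A sketch of the weights-only round trip: the clone must be built with the same architecture in code before load_weights can restore the parameters.

```python
import os
import tempfile

import numpy as np
import tensorflow as tf

def make_model():
    return tf.keras.Sequential([tf.keras.layers.Input(shape=(3,)),
                                tf.keras.layers.Dense(2)])

model = make_model()
x = np.ones((1, 3), dtype='float32')
before = model(x).numpy()

path = os.path.join(tempfile.mkdtemp(), 'demo.weights.h5')
model.save_weights(path)                  # weights only, no architecture

clone = make_model()                      # architecture rebuilt in code
clone.load_weights(path)
after = clone(x).numpy()
print(np.allclose(before, after))         # True
```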
9
When do you need persistent=True in tf.GradientTape?
Hard
Answer: By default the tape is released after one gradient() call. Set persistent=True to compute multiple gradients from the same tape (e.g., Jacobians, higher-order derivatives, multiple losses). Delete the tape manually (del tape) to free its resources.
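A minimal sketch: two gradient() calls against the same tape, which would raise an error without persistent=True.

```python
import tensorflow as tf

x = tf.constant(3.0)
with tf.GradientTape(persistent=True) as tape:
    tape.watch(x)                # constants are not watched by default
    y = x * x                    # y = x^2
    z = y * x                    # z = x^3

dy_dx = tape.gradient(y, x)      # 2x = 6
dz_dx = tape.gradient(z, x)      # 3x^2 = 27; second call needs persistent=True
del tape                         # release the tape's resources manually
```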
10
What can you visualize with TensorBoard?
Medium
Answer: Scalars (loss/accuracy curves), graphs (model architecture), histograms (weight distributions), images, text, embeddings (PCA/t-SNE projections), and profiling (performance bottlenecks). The tf.keras.callbacks.TensorBoard callback logs these automatically.
11
Explain TensorFlow distributed strategies.
Hard
Answer:
- MirroredStrategy: synchronous on single host, multiple GPUs.
- MultiWorkerMirroredStrategy: multiple hosts, each with GPUs.
- TPUStrategy: Google TPUs.
- ParameterServerStrategy: asynchronous, PS/worker architecture.
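The usage pattern is the same across strategies: create the strategy, then build and compile the model inside its scope. A sketch with MirroredStrategy (which falls back to a single CPU/GPU replica when no extra devices are visible):

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()   # all visible GPUs, or CPU fallback
print('Replicas:', strategy.num_replicas_in_sync)

with strategy.scope():            # variables created here are mirrored per replica
    model = tf.keras.Sequential([tf.keras.layers.Input(shape=(8,)),
                                 tf.keras.layers.Dense(1)])
    model.compile(optimizer='sgd', loss='mse')
# model.fit(...) then splits each global batch across the replicas
```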
12
How do you add regularization to Keras models?
Easy
Answer:
- kernel_regularizer=tf.keras.regularizers.l2(0.01) in layers.
- Dropout layers (tf.keras.layers.Dropout).
- BatchNorm (also acts as a mild regularizer).
- Data augmentation via tf.keras.layers.RandomFlip, etc.
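A sketch combining the first two techniques; the L2 penalty is tracked in model.losses and added to the total training loss automatically.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(16,)),
    tf.keras.layers.Dense(32, activation='relu',
                          kernel_regularizer=tf.keras.regularizers.l2(0.01)),
    tf.keras.layers.Dropout(0.5),     # active only when called with training=True
    tf.keras.layers.Dense(1),
])
_ = model(tf.ones((1, 16)))
# The L2 penalty appears here and is added to the training loss
print(model.losses)
```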
13
Steps for transfer learning using tf.keras.applications?
Medium
Answer:
1. Load the base model with weights='imagenet', include_top=False.
2. Freeze its layers: base_model.trainable = False.
3. Add new trainable top layers.
4. Train the top layers; then optionally unfreeze and fine-tune with a low learning rate.
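The steps above can be sketched as follows (weights=None keeps the sketch offline; in practice you would pass weights='imagenet', and the 96x96 input and 10-class head are illustrative choices):

```python
import tensorflow as tf

# weights=None avoids a download in this sketch; normally weights='imagenet'
base = tf.keras.applications.MobileNetV2(input_shape=(96, 96, 3),
                                         include_top=False, weights=None)
base.trainable = False                       # step 2: freeze the backbone

inputs = tf.keras.Input(shape=(96, 96, 3))
x = base(inputs, training=False)             # keep BatchNorm statistics frozen
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(10, activation='softmax')(x)  # step 3: new head
model = tf.keras.Model(inputs, outputs)
```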
14
How to create a custom Keras layer?
Hard
Answer: Subclass tf.keras.layers.Layer. Define __init__ for configuration, build(input_shape) to create weights via add_weight(), and call() for the forward pass.
class MyDense(tf.keras.layers.Layer):
    def __init__(self, units):
        super().__init__()
        self.units = units

    def build(self, input_shape):
        self.w = self.add_weight(shape=(input_shape[-1], self.units))

    def call(self, inputs):
        return tf.matmul(inputs, self.w)
15
What is TFRecord? Why use it?
Medium
Answer: TFRecord is TensorFlow's binary storage format. It's efficient for large datasets, reduces I/O overhead, and integrates with tf.data. It stores serialized tf.train.Example protocol buffers. Ideal for distributed training on TPUs.
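A minimal write-then-read sketch: serialize one tf.train.Example to a TFRecord file, then parse it back through a tf.data pipeline.

```python
import os
import tempfile

import tensorflow as tf

path = os.path.join(tempfile.mkdtemp(), 'demo.tfrecord')

# Write one serialized tf.train.Example
example = tf.train.Example(features=tf.train.Features(feature={
    'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[7])),
}))
with tf.io.TFRecordWriter(path) as writer:
    writer.write(example.SerializeToString())

# Read it back through tf.data and parse the proto
spec = {'label': tf.io.FixedLenFeature([], tf.int64)}
ds = tf.data.TFRecordDataset(path).map(
    lambda rec: tf.io.parse_single_example(rec, spec))
labels = [int(r['label']) for r in ds]
print(labels)      # [7]
```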
16
What are Keras preprocessing layers? Give examples.
Medium
Answer: Built-in layers that put data preprocessing inside the model, making it portable. Examples: Normalization, Rescaling, CategoryEncoding, StringLookup, Hashing. Stateful ones are fitted to the dataset via adapt().
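A sketch of the adapt() workflow with the Normalization layer: it learns the feature mean and variance from data, then standardizes inputs inside the model.

```python
import numpy as np
import tensorflow as tf

data = np.array([[1.0], [2.0], [3.0]], dtype='float32')
norm = tf.keras.layers.Normalization()
norm.adapt(data)                 # learns mean and variance from the data
out = norm(data)                 # roughly zero mean, unit variance
print(float(tf.reduce_mean(out)))
```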
17
How do you enable mixed precision in TensorFlow?
Hard
Answer: Use tf.keras.mixed_precision.set_global_policy('mixed_float16'). The model computes in float16 for speed on GPUs with Tensor Cores while keeping float32 variables, and applies loss scaling to prevent gradient underflow. Keras handles this automatically in model.fit; custom loops must wrap the optimizer in a LossScaleOptimizer.
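A sketch of the float16-compute / float32-variable split that the policy creates (the policy is global state, so the sketch restores the default at the end):

```python
import tensorflow as tf

tf.keras.mixed_precision.set_global_policy('mixed_float16')

layer = tf.keras.layers.Dense(4)
y = layer(tf.ones((1, 2)))
print(layer.compute_dtype)       # float16: the math runs in half precision
print(layer.dtype)               # float32: the variables stay full precision

tf.keras.mixed_precision.set_global_policy('float32')   # restore the default
```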
18
How do you debug NaN losses in TensorFlow?
Hard
Answer:
- Add tf.debugging.check_numerics on logits/loss.
- Reduce the learning rate; clip gradients.
- Check for invalid inputs (NaN/inf).
- Use tf.keras.callbacks.TerminateOnNaN.
- Validate custom layer computations.
19
What is TensorFlow Serving? How does it work with Keras?
Medium
Answer: TensorFlow Serving is a system for serving models in production. It loads a SavedModel and exposes gRPC/REST endpoints, with version management, request batching, and hot reloading. Export with model.save('export/', save_format='tf').
20
How to convert Keras model to TensorFlow Lite or TF.js?
Medium
Answer:
- TFLite: Use tf.lite.TFLiteConverter.from_keras_model(model), call convert(), and write the .tflite file. Optionally apply post-training quantization.
- TF.js: Use the tensorflowjs_converter script or the Python API: tfjs.converters.save_keras_model(model, 'path').
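A sketch of the TFLite path on a toy model; the commented line shows where post-training quantization would be enabled.

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Input(shape=(4,)),
                             tf.keras.layers.Dense(1)])
converter = tf.lite.TFLiteConverter.from_keras_model(model)
# converter.optimizations = [tf.lite.Optimize.DEFAULT]   # optional quantization
tflite_bytes = converter.convert()
print(len(tflite_bytes), 'bytes')
```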
TensorFlow & Keras – Interview Cheat Sheet
Core TF2
- Eager execution (default)
- @tf.function: graph optimization
- GradientTape: custom gradients
Keras
- Sequential: linear stacks
- Functional: graph topology
- Subclassing: full control
Performance
- tf.data: .prefetch, AUTOTUNE
- Mixed precision: float16
- Distributed: MirroredStrategy
Save/Deploy
- SavedModel: production
- TFLite: mobile/edge
- TF.js: browser
Verdict: "Keras for fast prototyping, TensorFlow for production and scalability. TF2 = eager + graphs via @tf.function."