TensorBoard

TensorBoard is a visualization tool from the TensorFlow ecosystem that also integrates seamlessly with PyTorch. It provides real-time visualization of training metrics and is one of the most widely used monitoring tools in deep learning experiments.


Basic Usage

Installation

pip install tensorboard

PyTorch Integration

PyTorch natively supports TensorBoard through torch.utils.tensorboard.SummaryWriter:

from torch.utils.tensorboard import SummaryWriter

# Create a writer; logs are saved under the runs/ directory
writer = SummaryWriter('runs/experiment_1')

for epoch in range(num_epochs):
    train_loss = train_one_epoch(model, train_loader, optimizer)
    val_loss, val_acc = evaluate(model, val_loader)

    # Log scalar metrics
    writer.add_scalar('Loss/train', train_loss, epoch)
    writer.add_scalar('Loss/val', val_loss, epoch)
    writer.add_scalar('Accuracy/val', val_acc, epoch)
    writer.add_scalar('LR', optimizer.param_groups[0]['lr'], epoch)

writer.close()
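The loop above assumes helpers such as train_one_epoch and evaluate are defined elsewhere. As a self-contained sketch of the same logging pattern, here is a runnable version in which the tiny linear model, random data, and temporary log directory are all illustrative stand-ins:

```python
import os
import tempfile

import torch
import torch.nn as nn
from torch.utils.tensorboard import SummaryWriter

# Stand-in model and data, just so the loop actually runs.
model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
x = torch.randn(64, 10)
y = torch.randn(64, 1)

# Use a temporary directory here; in practice this would be runs/<experiment>.
logdir = os.path.join(tempfile.mkdtemp(), "toy_run")
writer = SummaryWriter(logdir)

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    writer.add_scalar("Loss/train", loss.item(), epoch)

writer.close()
print(os.listdir(logdir))  # an events.out.tfevents.* file appears here
```

Pointing `tensorboard --logdir` at the directory containing such event files renders the logged curves.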

Launching TensorBoard

# Basic launch
tensorboard --logdir=runs

# Specify a different port
tensorboard --logdir=runs --port=6007

# Compare multiple experiments
tensorboard --logdir=runs  # each subdirectory under runs/ becomes its own run

Then navigate to http://localhost:6006 in your browser.


Common Features

Logging Scalars

This is the most frequently used feature, allowing you to track metrics such as loss, accuracy, and learning rate as they evolve over training steps.

writer.add_scalar('tag', scalar_value, global_step)

# Log multiple scalars to the same chart
writer.add_scalars('Loss', {
    'train': train_loss,
    'val': val_loss
}, epoch)

Logging Images

# A single image
writer.add_image('sample', img_tensor, epoch)

# A grid of images (e.g. samples from one batch)
from torchvision.utils import make_grid
grid = make_grid(images[:16], nrow=4, normalize=True)
writer.add_image('batch_samples', grid, epoch)

Logging Histograms

Histograms are useful for observing how weight and gradient distributions change over time, which helps diagnose vanishing or exploding gradient problems:

for name, param in model.named_parameters():
    writer.add_histogram(f'weights/{name}', param, epoch)
    if param.grad is not None:
        writer.add_histogram(f'grads/{name}', param.grad, epoch)

Logging Model Graphs

Visualize the computational graph structure of a model:

import torch

dummy_input = torch.randn(1, 3, 224, 224)
writer.add_graph(model, dummy_input)

Logging Hyperparameters (HParams)

Record the mapping between hyperparameters and final metrics to facilitate comparison across different configurations:

writer.add_hparams(
    hparam_dict={'lr': 1e-3, 'batch_size': 32, 'optimizer': 'adam'},
    metric_dict={'final_loss': 0.15, 'final_acc': 0.95}
)

Practical Tips

  • Adopt a consistent naming convention for experiments: Use descriptive directory names such as runs/resnet50_lr1e-3_bs64_aug.
  • Flush periodically: Call writer.flush() to ensure data is written to disk.
  • Remote access: Use SSH port forwarding to view TensorBoard from a remote server on your local machine: ssh -L 6006:localhost:6006 user@server.
  • Comparison with other tools: TensorBoard is well suited for straightforward experiment tracking. For team collaboration and large-scale experiment management, consider using Weights & Biases or MLflow instead.
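The naming-convention tip above can be automated with a small helper. The sketch below builds a descriptive log directory from the run's hyperparameters and appends a timestamp so repeated runs of the same configuration stay distinct (make_run_name is a hypothetical helper name, not part of the TensorBoard API):

```python
from datetime import datetime


def make_run_name(model: str, lr: float, batch_size: int, tag: str = "") -> str:
    """Compose a descriptive TensorBoard log directory name.

    Hypothetical helper: the naming scheme mirrors the
    runs/resnet50_lr1e-3_bs64_aug convention suggested above.
    """
    parts = [model, f"lr{lr:g}", f"bs{batch_size}"]
    if tag:
        parts.append(tag)
    # Timestamp keeps repeated runs of the same config in separate directories.
    parts.append(datetime.now().strftime("%Y%m%d-%H%M%S"))
    return "runs/" + "_".join(parts)


# e.g. 'runs/resnet50_lr0.001_bs64_aug_20240101-120000'
print(make_run_name("resnet50", 1e-3, 64, tag="aug"))
```

The returned string can be passed directly to SummaryWriter, and each run then shows up as a separately named experiment in the TensorBoard UI.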
