DeepSDFStruct.deep_sdf.training#

DeepSDF Model Training#

This module implements the complete training pipeline for DeepSDF neural networks. It provides loss functions, learning rate schedules, training loops, and checkpoint management.

Key Features#

Loss Functions
  • ClampedL1Loss: L1 loss with value clamping for stability

  • Support for custom loss functions

Learning Rate Schedules
  • ConstantLearningRateSchedule: Fixed learning rate

  • StepLearningRateSchedule: Step decay schedule

  • WarmupLearningRateSchedule: Warmup followed by decay
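
To make the step schedule concrete, here is a minimal plain-Python sketch assuming the common step-decay rule lr(epoch) = initial · factor^(epoch // interval). This is an illustration of the concept only; the actual StepLearningRateSchedule implementation may differ in detail.

```python
# Illustrative sketch of a step-decay schedule, assuming the common form
# lr(epoch) = initial * factor ** (epoch // interval).
# The real StepLearningRateSchedule may differ.
class StepSchedule:
    def __init__(self, initial, interval, factor):
        self.initial = initial
        self.interval = interval
        self.factor = factor

    def get_learning_rate(self, epoch):
        # Decay by `factor` once every `interval` epochs.
        return self.initial * (self.factor ** (epoch // self.interval))

sched = StepSchedule(initial=5e-4, interval=500, factor=0.5)
print(sched.get_learning_rate(0))     # 0.0005
print(sched.get_learning_rate(1000))  # halved twice: 0.000125
```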

Training Loop
  • Multi-epoch training with validation

  • Automatic checkpointing and model saving

  • Loss tracking and visualization

  • Support for distributed training

  • Resume from checkpoint capability

Experiment Management
  • MLflow integration for experiment tracking

  • Automatic logging of hyperparameters

  • Training curve visualization

  • Model versioning

The training process follows the DeepSDF paper methodology with extensions for lattice structures and microstructured materials.

Examples

Train a DeepSDF model (per the documented signature, train_deep_sdf takes the experiment directory and a data source; the training specs of the form below are stored in the experiment directory):

from DeepSDFStruct.deep_sdf.training import train_deep_sdf

# Training specs kept in the experiment directory take this form:
specs = {
    'NetworkSpecs': {...},
    'TrainSpecs': {
        'NumEpochs': 2000,
        'LearningRateSchedule': {...}
    }
}

train_deep_sdf(experiment_dir, data_source)
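
The elided 'LearningRateSchedule' entry is experiment-specific. As a purely hypothetical illustration (field names follow the original DeepSDF specs convention and may differ in this package), a step-decay entry could look like:

```json
{
    "LearningRateSchedule": [
        {
            "Type": "Step",
            "Initial": 0.0005,
            "Interval": 500,
            "Factor": 0.5
        }
    ]
}
```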

Functions

append_parameter_magnitudes(param_mag_log, model)

clip_logs(loss_log, lr_log, timing_log, ...)

create_interpolated_meshes_from_latent(...)

Interpolate between latent vectors and export reconstructed meshes.

get_learning_rate_schedules(specs)

get_mean_latent_vector_magnitude(latent_vectors)

get_spec_with_default(specs, key, default)

load_logs(experiment_directory)

reconstruct_meshs_from_latent(...[, ...])

save_latent_vectors(experiment_directory, ...)

save_logs(experiment_directory, loss_log, ...)

save_model(experiment_directory, filename, ...)

save_optimizer(experiment_directory, ...)

train_deep_sdf(experiment_directory, data_source)

Classes

class DeepSDFStruct.deep_sdf.training.ClampedL1Loss(clamp_val=0.1)#

Bases: torch.nn.modules.module.Module

forward(input, target)#

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
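
The clamping idea behind this loss can be sketched in plain Python (an illustration of the concept, not the torch implementation): both prediction and target are clamped to [-clamp_val, clamp_val] before taking the absolute difference, which keeps the loss focused on SDF values near the surface.

```python
# Conceptual sketch of a clamped L1 loss for SDF regression.
# The real ClampedL1Loss is a torch.nn.Module operating on tensors.
def clamped_l1(pred, target, clamp_val=0.1):
    clamp = lambda x: max(-clamp_val, min(clamp_val, x))
    # Both values are clamped to [-clamp_val, clamp_val] before the L1 difference.
    return abs(clamp(pred) - clamp(target))

print(clamped_l1(0.05, -0.02))  # ~0.07, both inside the clamp band
print(clamped_l1(5.0, 3.0))     # 0.0, both clamp to 0.1 (far from surface)
```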

class DeepSDFStruct.deep_sdf.training.ConstantLearningRateSchedule(value)#

Bases: DeepSDFStruct.deep_sdf.training.LearningRateSchedule

get_learning_rate(epoch)#

class DeepSDFStruct.deep_sdf.training.LearningRateSchedule#

Bases: object

get_learning_rate(epoch)#

class DeepSDFStruct.deep_sdf.training.StepLearningRateSchedule(initial, interval, factor)#

Bases: DeepSDFStruct.deep_sdf.training.LearningRateSchedule

get_learning_rate(epoch)#

class DeepSDFStruct.deep_sdf.training.WarmupLearningRateSchedule(initial, warmed_up, length)#

Bases: DeepSDFStruct.deep_sdf.training.LearningRateSchedule

get_learning_rate(epoch)#
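
One plausible behavior for a warmup schedule with this constructor, sketched in plain Python, is a linear ramp from initial to warmed_up over length epochs, holding warmed_up afterwards (this mirrors the original DeepSDF implementation; the exact post-warmup behavior in this package may differ):

```python
# Illustrative warmup schedule: linear ramp, then hold.
# Assumed semantics; not the package's actual implementation.
class WarmupSchedule:
    def __init__(self, initial, warmed_up, length):
        self.initial = initial
        self.warmed_up = warmed_up
        self.length = length

    def get_learning_rate(self, epoch):
        if epoch >= self.length:
            # Warmup finished: hold the target learning rate.
            return self.warmed_up
        # Linear ramp from `initial` to `warmed_up` over `length` epochs.
        return self.initial + (self.warmed_up - self.initial) * epoch / self.length

sched = WarmupSchedule(initial=1e-5, warmed_up=5e-4, length=100)
print(sched.get_learning_rate(0))    # 1e-05
print(sched.get_learning_rate(200))  # 0.0005
```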

DeepSDFStruct.deep_sdf.training.append_parameter_magnitudes(param_mag_log, model)#

DeepSDFStruct.deep_sdf.training.clip_logs(loss_log, lr_log, timing_log, lat_mag_log, param_mag_log, epoch)#

DeepSDFStruct.deep_sdf.training.create_interpolated_meshes_from_latent(experiment_directory, indices, steps, checkpoint='latest', max_batch=32, filetype='ply', device='cpu')#

Interpolate between latent vectors and export reconstructed meshes.

This function loads a trained DeepSDF model and its latent vectors, then interpolates between consecutive latent codes specified in indices. At each interpolation step, a 3D surface mesh is reconstructed and exported to disk in the requested format.

Parameters:
  • experiment_directory (str | PathLike) – Path to the experiment directory containing checkpoints and latent vectors.

  • indices (list[int], optional) – Sequence of latent vector indices between which interpolation is performed. Defaults to [1, 2, 3, 4, 5, 6, 7, 8].

  • steps (int, optional) – Number of interpolation steps (including endpoints). Defaults to 11.

  • checkpoint (str, optional) – Which checkpoint to load. Defaults to “latest”.

  • max_batch (int, optional) – Maximum batch size for inference. Defaults to 32.

  • filetype (str, optional) – File extension for exported meshes (e.g., “ply”, “obj”). Defaults to “ply”.

  • device (str, optional) – Device on which reconstruction is run. Defaults to “cpu”.

Return type:

None

Example

>>> create_interpolated_meshes_from_latent(
...     experiment_directory="experiments/run1",
...     indices=[1, 2, 3, 4, 5, 6, 7, 8],
...     steps=11,
...     checkpoint="latest",
...     max_batch=32,
...     filetype="ply",
... )

DeepSDFStruct.deep_sdf.training.get_learning_rate_schedules(specs)#

DeepSDFStruct.deep_sdf.training.get_mean_latent_vector_magnitude(latent_vectors)#

DeepSDFStruct.deep_sdf.training.get_spec_with_default(specs, key, default)#

DeepSDFStruct.deep_sdf.training.load_logs(experiment_directory)#

DeepSDFStruct.deep_sdf.training.reconstruct_meshs_from_latent(experiment_directory, checkpoint='latest', max_batch=32, filetype='ply', device='cpu')#

DeepSDFStruct.deep_sdf.training.save_latent_vectors(experiment_directory, filename, latent_vec, epoch)#

DeepSDFStruct.deep_sdf.training.save_logs(experiment_directory, loss_log, lr_log, timing_log, lat_mag_log, param_mag_log, epoch)#

DeepSDFStruct.deep_sdf.training.save_model(experiment_directory, filename, decoder, epoch)#

DeepSDFStruct.deep_sdf.training.save_optimizer(experiment_directory, filename, optimizer, epoch)#

DeepSDFStruct.deep_sdf.training.train_deep_sdf(experiment_directory, data_source, continue_from=None, batch_split=1, device=None)#
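
As an aside on the latent-magnitude logging above, get_mean_latent_vector_magnitude presumably reduces a set of latent codes to the mean Euclidean norm. A plain-Python sketch of that reduction (the real function operates on torch tensors):

```python
import math

# Assumed semantics of get_mean_latent_vector_magnitude, re-implemented
# in plain Python for illustration: mean L2 norm over all latent vectors.
def mean_latent_magnitude(latent_vectors):
    norms = [math.sqrt(sum(c * c for c in v)) for v in latent_vectors]
    return sum(norms) / len(norms)

vecs = [[3.0, 4.0], [0.0, 1.0]]
print(mean_latent_magnitude(vecs))  # (5.0 + 1.0) / 2 = 3.0
```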