The framework supports inserting autoencoder (AE) layers, i.e. an encoder/decoder pair, between the client and server models. These can be used to compress the gradients that are sent over the network, reducing the number of bytes transferred and thereby conserving battery. The AE has to be trained separately and is hooked into the framework by modifying your existing model_provider instance: you provide a special ModelProvider implementation that loads the AE models from the filesystem and automatically attaches the encoder and decoder layers before and after the server/client models. During normal model training, the AE layers are frozen.
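The mechanism above can be sketched in a few lines of PyTorch. This is an illustrative toy, not the framework's actual classes: the encoder/decoder architectures and channel sizes are placeholder assumptions, chosen only to show how frozen AE layers compress the activations crossing the cut.

```python
import torch
import torch.nn as nn

# Hypothetical AE pair wrapping the cut layer: the encoder compresses
# what leaves one side of the split, the decoder restores it on the other.
# Architectures and sizes are illustrative assumptions.
encoder = nn.Sequential(nn.Conv2d(16, 4, kernel_size=1), nn.ReLU())
decoder = nn.Sequential(nn.Conv2d(4, 16, kernel_size=1), nn.ReLU())

# During normal model training the AE layers are frozen:
for module in (encoder, decoder):
    for p in module.parameters():
        p.requires_grad = False

smashed = torch.randn(8, 16, 32, 32)  # activations at the cut layer
compressed = encoder(smashed)         # 4 channels instead of 16 -> fewer bytes on the wire
restored = decoder(compressed)        # other side restores the original shape
print(compressed.shape, restored.shape)
```

Freezing the AE keeps the main training loop unchanged: gradients still flow *through* the AE layers during backpropagation, but the AE weights themselves are never updated.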
You can use the AE feature by following these steps:

- Train an AE model on your data and save the final torch files.
- Create a new file under `configs/model_provider` that defines a new model provider that loads the AE model and your normal model:

```yaml
_target_: edml.models.provider.autoencoder.AutoencoderModelProvider
model_provider:
  _target_: edml.models.provider.cut_layer.CutLayerModelProvider
  model:
    _target_: edml.models.resnet_models.ResNet
    block:
      _target_: hydra.utils.get_class
      path: edml.models.resnet_models.BasicBlock
    num_blocks: [ 3, 3, 3 ]
    num_classes: 100
  cut_layer: 4
decoder:
  _target_: edml.models.provider.path.SerializedModel
  model:
    _target_: edml.models.partials.resnet.Decoder
  path: resnet_decoder.pth
encoder:
  _target_: edml.models.provider.path.SerializedModel
  model:
    _target_: edml.models.partials.resnet.Encoder
  path: resnet_encoder.pth
```
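The first step, training the AE separately, might look roughly like the following. This is a hedged sketch: the tiny convolutional AE and the random "activations" are stand-ins, and in practice the encoder/decoder must match the partial models the config refers to (e.g. `edml.models.partials.resnet`). Whether `SerializedModel` expects a state dict or a pickled module is not specified here; the sketch saves state dicts.

```python
import torch
import torch.nn as nn

# Toy autoencoder standing in for the real cut-layer AE (assumed shapes).
encoder = nn.Conv2d(16, 4, kernel_size=1)
decoder = nn.Conv2d(4, 16, kernel_size=1)
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)

# Toy training loop: minimize reconstruction error on random "activations".
# In a real setup you would train on activations recorded at the cut layer.
for _ in range(100):
    batch = torch.randn(8, 16, 32, 32)
    loss = nn.functional.mse_loss(decoder(encoder(batch)), batch)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Save the two halves so the config's `path` keys can point at them.
torch.save(encoder.state_dict(), "resnet_encoder.pth")
torch.save(decoder.state_dict(), "resnet_decoder.pth")
```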
Builtin model providers
We provide two model provider configurations with AE support, for ResNet20 and ResNet110. You can use them by setting the `model_provider` configuration key to either `resnet20-with-autoencoder` or `resnet110-with-autoencoder`. This imports the corresponding values from `edml/config/model_provider/<key value>`.
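Since the config uses Hydra (`_target_`, `hydra.utils.get_class`), selecting a builtin provider presumably happens through a defaults list in your experiment config. A minimal sketch, assuming your top-level config composes `model_provider` this way:

```yaml
# Hypothetical experiment config fragment; the surrounding keys are assumptions.
defaults:
  - model_provider: resnet20-with-autoencoder
```

Equivalently, Hydra allows overriding the group from the command line with `model_provider=resnet110-with-autoencoder`.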