How Do I Implement Transfer Learning in NiftyNet?
I'd like to perform some transfer learning using the NiftyNet stack, as my dataset of labeled images is rather small. In TensorFlow this is possible: I can load a variety of pre-trained networks and fine-tune them on my own data. Is there an equivalent mechanism in NiftyNet?
Solution 1:
[Edit]: Here are the docs for transfer learning with NiftyNet.
This feature is currently being worked on. See here for full details.
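For reference, the configuration-driven approach described in those docs looks roughly like the sketch below. The [TRAINING] parameter names (starting_iter, vars_to_restore, vars_to_freeze) and the regex are given from memory of the linked documentation, so treat them as assumptions and verify against the current docs before using them.

```ini
[TRAINING]
# Resume from the latest checkpoint in model_dir (-1 = most recent iteration)
starting_iter = -1
# Regex of variable names to restore from that checkpoint (parameter name assumed)
vars_to_restore = ^DenseVNet\/(conv|dense).*
# Regex of variable names to keep frozen while fine-tuning (parameter name assumed)
vars_to_freeze = ^DenseVNet\/(conv|dense).*
```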
Intended capabilities include the following (a rough TensorFlow sketch of the restore/freeze mechanics follows the list):
- Command for printing all trainable variable names (with optional regular expression matching)
- Ability to randomly initialize a subset of variables, where the subset is selected by regex matching on variable names
- Ability to restore a subset of the variables from an existing checkpoint and continue updating them; if the optimization method is changed, handle the method-specific variables (e.g. momentum)
- Ability to restore the remaining variables from an existing checkpoint and freeze their trained weights
- Saving all trainable variables after training
- Configuration parameters for fine-tuning (including the variable-name regexes), plus unit tests
- A demo/tutorial
- Preprocessing of checkpoints to handle compatibility issues
- Handling of batch norm and dropout layers (editing networks to remove batch-norm variables)
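Until that lands in a NiftyNet release, the same restore-a-subset-and-freeze-the-rest mechanics can be done directly at the TensorFlow 1.x level that NiftyNet sits on. The sketch below uses a toy graph, a hypothetical checkpoint path and a hypothetical regex; it is not NiftyNet's internal API, just an illustration of the variable-selection technique described in the list above.

```python
import re
import tensorflow as tf  # TF 1.x, as used by NiftyNet

CHECKPOINT_PATH = '/path/to/pretrained/model.ckpt'  # hypothetical checkpoint
RESTORE_REGEX = r'^net/conv'                        # hypothetical: restore conv layers only

# Toy stand-in graph; in practice this would be the NiftyNet network.
images = tf.placeholder(tf.float32, [None, 32, 32, 32, 1])
labels = tf.placeholder(tf.int32, [None])
with tf.variable_scope('net'):
    feat = tf.layers.conv3d(images, 8, 3, padding='same', name='conv_1')
    feat = tf.reduce_mean(feat, axis=[1, 2, 3])
    logits = tf.layers.dense(feat, 2, name='final_fc')
loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))

# 1. Print all trainable variable names, optionally filtered by a regex.
trainable = tf.trainable_variables()
for var in trainable:
    print(var.name)

# 2. Split trainable variables into "restore from checkpoint" vs "randomly initialize".
to_restore = [v for v in trainable if re.match(RESTORE_REGEX, v.name)]
to_retrain = [v for v in trainable if v not in to_restore]

# 3. Passing only the new variables to the optimizer freezes the restored weights.
optimizer = tf.train.AdamOptimizer(1e-4)
train_op = optimizer.minimize(loss, var_list=to_retrain)

# 4. Restore the matched subset; initialize everything else
#    (new layers, optimizer slots such as Adam moments, etc.).
saver = tf.train.Saver(var_list=to_restore)
init_op = tf.variables_initializer(
    [v for v in tf.global_variables() if v not in to_restore])

with tf.Session() as sess:
    sess.run(init_op)
    saver.restore(sess, CHECKPOINT_PATH)
    # ... sess.run(train_op, feed_dict=...) over the small labeled dataset ...
```

Freezing by restricting the optimizer's var_list keeps the network definition untouched; the alternative of inserting tf.stop_gradient works too but requires editing the graph itself.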