# EMADL2CPP issues
https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/generators/EMADL2CPP/-/issues · 2024-02-05

# Schema validation fails when root model is not called 'MnistClassifier' (PyTorch Backend)
Issue: https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/generators/EMADL2CPP/-/issues/123 · 2024-02-05 · Jonas Djurevci

# Remark
The training configuration of a network must be named `<componentName>_<networkInstanceName>` (where `componentName` is the name of the component containing the network instance, and `networkInstanceName` is the name of the network instance inside that component). For example, the training configuration of the network in 'adanet_experiment' must be named `mnistClassifier_net`, since the component containing the network is called 'MnistClassifier' and the instance name is 'net'. If the configuration name does not follow that pattern, executing the generated Python script will fail.
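To make the naming pattern concrete, here is a small pure-Python sketch (a hypothetical helper, not part of the generator) that splits a configuration name into its component and instance parts:

```python
import re

# Hypothetical helper illustrating the <componentName>_<networkInstanceName>
# pattern; the component name is written with a lower-case first letter,
# e.g. 'mnistClassifier_net'.
_CONFIG_NAME = re.compile(r"^([a-z][A-Za-z0-9]*)_([a-z][A-Za-z0-9]*)$")

def split_config_name(name: str) -> tuple[str, str]:
    """Return (componentName, networkInstanceName) or raise on a bad name."""
    match = _CONFIG_NAME.fullmatch(name)
    if match is None:
        raise ValueError(
            f"{name!r} does not follow <componentName>_<networkInstanceName>")
    return match.group(1), match.group(2)

print(split_config_name("mnistClassifier_net"))  # ('mnistClassifier', 'net')
```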
# Problem
During schema validation, the name of the configuration is checked against hardcoded values (see `de.monticore.mlpipelines.automl.helper.ConfigurationValidationHandler.getScmName()`). This means that schema validation only succeeds if the name of the configuration is 'Supervised' or starts with 'mnistClassifier'. The first option is not viable, as the name must follow the pattern described in the remark above. Therefore, the configuration must be named `mnistClassifier_<networkInstanceName>`, which in turn means that the root EMADL model must be called 'MnistClassifier' or 'mnistClassifier'. Executing the EMADL2CPP generator on other models will therefore fail.
# Steps to reproduce
Execute the generator on an architecture whose root model is not called 'MnistClassifier' or 'mnistClassifier'. Example execution:
- Main class: `de.monticore.lang.monticar.emadl.generator.MontiAnnaCli`
- Program arguments: `-m src/main/resources/calculator_experiment/emadl -r calculator.Connector -o target -b PYTORCH`
# Workaround
Until this issue is fixed, the method `getScmName()` (see above) must be modified to allow schema validation for configurations with custom names. To allow the execution above, the method must be modified as follows:
```java
private static String getScmName(ASTConfiguration configuration) {
    String scmName = configuration.getName();
    if (scmName.startsWith("mnistClassifier") || scmName.startsWith("connector")) {
        scmName = "Supervised";
    } else if (scmName.equals("AdaNet")) {
        scmName = "NeuralArchitectureSearch";
    }
    return scmName;
}
```

Antonio Antovski

# Network Tagging is ignored (PyTorch Backend)
Issue: https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/generators/EMADL2CPP/-/issues/122 · 2024-02-05 · Jonas Djurevci

# Problem
Multiple instances of the same network are trained separately instead of only once, even though a tagging file is present. While the tagging file is parsed and used to download the dataset, it is not used to determine which network instances need to be trained (see method `de.monticore.mlpipelines.workflow.AbstractWorkflow.getNetworkInstanceConfigs`). As a consequence, one training and one pipeline configuration file must currently be provided per network instance.
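The intended behavior could look like the following sketch (hypothetical data shapes; the real instance list would come from the parsed tagging file and `getNetworkInstanceConfigs`):

```python
from collections import defaultdict

# Hypothetical sketch: instances of the same network component should share
# one training run instead of each being trained separately.
instances = [
    ("net1", "calculator.Network"),  # (instance name, component type)
    ("net2", "calculator.Network"),  # same component type as net1
]

by_component = defaultdict(list)
for instance_name, component_type in instances:
    by_component[component_type].append(instance_name)

# Train one representative per component type; the others reuse its weights.
training_plan = {ctype: names[0] for ctype, names in by_component.items()}
print(training_plan)  # {'calculator.Network': 'net1'}
```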
# Steps to reproduce
Note: Getting far enough to trigger this problem requires implementing the workaround presented in issue #123, given that it is not yet solved.
Execute the EMADL2CPP generator on the MNISTCalculator example application located in `src/main/resources/calculator_experiment`.
For this purpose, use the following Run Configuration:
- Main class: `de.monticore.lang.monticar.emadl.generator.MontiAnnaCli`
- Program arguments: `-m src/main/resources/calculator_experiment/emadl -r calculator.Connector -o target -b PYTORCH`

Antonio Antovski

# Training loss is NaN sometimes (PyTorch Backend)
Issue: https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/generators/EMADL2CPP/-/issues/121 · 2024-02-05 · Jonas Djurevci

# Problem
When running certain experiments, the training-loss calculation in `src/main/resources/experiments/steps/MySupervisedTrainer.py` (line 80) sometimes returns 'NaN'. This results in debug messages like:
`Epoch:1 Train Loss:nan Train Accuracy:10.03%`
# Steps to reproduce
Note: Getting far enough to trigger this problem requires implementing the workaround presented in issue #123, given that it is not yet solved.
This issue is not deterministic. However, when executing the EMADL2CPP generator as follows, there is a high chance of encountering the problem.
- Main class: `de.monticore.lang.monticar.emadl.generator.MontiAnnaCli`
- Program arguments: `-m src/main/resources/calculator_experiment/emadl -r calculator.Connector -o target -b PYTORCH`
Out of the six runs generated by this execution, typically 2-4 runs exhibit this behavior.
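One way to debug this is to fail fast with context instead of logging 'nan' for a whole epoch. A pure-Python sketch of such a guard (hypothetical helper, not part of `MySupervisedTrainer.py`; with PyTorch tensors one would check `torch.isnan(loss)` instead):

```python
import math

def check_finite_loss(loss_value: float, epoch: int, batch: int) -> float:
    """Raise with context as soon as the loss stops being a finite number."""
    if math.isnan(loss_value) or math.isinf(loss_value):
        raise RuntimeError(
            f"non-finite loss {loss_value} in epoch {epoch}, batch {batch}: "
            "check the learning rate, input normalization, and log() arguments")
    return loss_value

check_finite_loss(0.37, epoch=1, batch=0)            # passes silently
# check_finite_loss(float("nan"), epoch=1, batch=1)  # would raise RuntimeError
```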
I've never encountered this issue with other experiments like the 'adanet_experiment' or 'squaredigit_experiment', even though all mentioned experiments use the same loss metric (cross-entropy).

Antonio Antovski

# Documentation of AutoML techniques
Issue: https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/generators/EMADL2CPP/-/issues/110 · 2023-06-07 · aixai

Extend the README of EMADL2CPP.
Copy the AutoML explanations from our thesis.
Tobias Hörnschemeyer, Nazish Qamar, Hiroshi Hamano, AkashKumarDS

# Fixing Job-Token in settings.xml
Issue: https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/generators/EMADL2CPP/-/issues/114 · 2023-05-30 · Nazish Qamar

The following entry needs to be added to the settings.xml file:
<name>Job-Token</name>
<value>${env.CI_JOB_TOKEN}</value>
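For context, the surrounding structure that GitLab's Maven integration documentation describes looks roughly like this (the `<server>` wrapper and the `gitlab-maven` id are assumptions to verify against the project's `pom.xml` and the GitLab docs; only the `<name>`/`<value>` pair above is taken from this issue):

```xml
<settings>
  <servers>
    <server>
      <!-- id must match the repository id used in pom.xml -->
      <id>gitlab-maven</id>
      <configuration>
        <httpHeaders>
          <property>
            <name>Job-Token</name>
            <value>${env.CI_JOB_TOKEN}</value>
          </property>
        </httpHeaders>
      </configuration>
    </server>
  </servers>
</settings>
```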
Nazish Qamar

# train.h5 file too big, needs to be replaced with a smaller version
Issue: https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/generators/EMADL2CPP/-/issues/113 · 2023-05-30 · Nazish Qamar

As the train file is 149 MB, it causes problems in the CI pipeline. We will again replace it with a subset of the original train dataset.

Nazish Qamar

# AutoML: Architecture Search
Issue: https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/generators/EMADL2CPP/-/issues/42 · 2023-05-29 · Evgeny Kusmenko

- Please extend the framework to optimize the MontiAnna neural architecture for a given learning problem
- create tests for your framework
- create a model in the MNISTCalculator project X
- create a CI experiment in the MNISTCalculator project
- please create an AutoML pipeline

Tobias Hörnschemeyer, Nazish Qamar · 2023-05-01

# Saving checkpoints during the model training (PyTorch backend)
Issue: https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/generators/EMADL2CPP/-/issues/112 · 2023-05-25 · Nazish Qamar

Problem: The PyTorch backend allows saving the trained model only after the complete training.
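Saving a checkpoint after every epoch and resuming from the newest one can be sketched in pure Python (JSON as a stand-in for `torch.save`/`torch.load`; the `target/checkpoints` layout is a hypothetical choice, not the generator's actual one):

```python
import json
from pathlib import Path

def save_checkpoint(ckpt_dir: Path, epoch: int, state: dict) -> None:
    """Write one checkpoint file per completed epoch."""
    ckpt_dir.mkdir(parents=True, exist_ok=True)
    (ckpt_dir / f"epoch_{epoch:04d}.json").write_text(json.dumps(state))

def load_latest_checkpoint(ckpt_dir: Path):
    """Return (epoch, state) of the newest checkpoint, or (0, None)."""
    files = sorted(ckpt_dir.glob("epoch_*.json"))
    if not files:
        return 0, None  # nothing saved yet: start from scratch
    latest = files[-1]
    epoch = int(latest.stem.split("_")[1])
    return epoch, json.loads(latest.read_text())

# Usage: training resumes after the last completed epoch.
ckpt_dir = Path("target/checkpoints")
start_epoch, state = load_latest_checkpoint(ckpt_dir)
for epoch in range(start_epoch + 1, 4):
    state = {"epoch": epoch, "loss": 1.0 / epoch}  # placeholder for real weights
    save_checkpoint(ckpt_dir, epoch, state)
```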
Improvement: The training pipeline should save the model checkpoints after each epoch. If one tries to run the incomplete experiment later, there should be a provision in the training pipeline to load the latest checkpoint and continue the training from there.

aixai

# How to relate specialized configuration files to particular EMADL network components
Issue: https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/generators/EMADL2CPP/-/issues/103 · 2023-04-29 · aixai

Until now, the configuration file is mapped to a network by its name.
If we introduce conf files like efficientname.conf, the mapping gets lost (which network does it refer to?)
Tobias Hörnschemeyer, Nazish Qamar, Hiroshi Hamano, AkashKumarDS · 2023-03-25

# Link pipeline configurations to their reference model
Issue: https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/generators/EMADL2CPP/-/issues/106 · 2023-04-25 · Feras Mulhem

This issue aims to enable the toolchain to resolve the correct pipeline reference model based on the pipeline configuration. This mechanism is similar to resolving the training-time architectures using training configuration.
**Tasks**
- [ ] Add a schema definition that contains an entry for the desired reference model
- [ ] In the pipeline configuration, add an entry that determines which schema shall be used (for example, _learning method_)
- [ ] Resolve the corresponding schema definitions for the given pipeline configuration
- [ ] Validate the pipeline configuration (similar to [training configurations](https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/generators/EMADL2CPP/-/blob/master/src/main/java/de/monticore/mlpipelines/workflow/AbstractWorkflow.java#L139))
**Notes**
- The above-mentioned steps shall be added as part of the common [workflow](https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/generators/EMADL2CPP/-/blob/master/src/main/java/de/monticore/mlpipelines/workflow/AbstractWorkflow.java#L139) for ML pipelines.
- Currently, the workflow resolves a default pipeline reference model

Tobias Hörnschemeyer, Nazish Qamar, Hiroshi Hamano, AkashKumarDS

# Restrict shrinking
Issue: https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/generators/EMADL2CPP/-/issues/111 · 2023-04-25 · aixai

Discuss this further next time, decide what to do.

Nazish Qamar · 2023-03-25

# Adanet: create different component search strategies
Issue: https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/generators/EMADL2CPP/-/issues/53 · 2023-04-24 · Tobias Hörnschemeyer

Currently, the candidate search only focuses on components with the same depth or with depth + 1, where depth is the depth of the best component from the last iteration.
There might be other strategies to find components.
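For illustration, the current strategy and one conceivable alternative can be sketched as follows (hypothetical function names; the real search works on component objects rather than bare depths):

```python
# Current strategy: candidates only at the best component's depth d and d + 1.
def candidate_depths_current(best_depth: int) -> list[int]:
    return [best_depth, best_depth + 1]

# Possible alternative: also allow shallower candidates within a window,
# never going below depth 1.
def candidate_depths_window(best_depth: int, window: int = 2) -> list[int]:
    return [d for d in range(best_depth - window, best_depth + window + 1)
            if d >= 1]

print(candidate_depths_current(3))  # [3, 4]
print(candidate_depths_window(3))   # [1, 2, 3, 4, 5]
```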
Tobias Hörnschemeyer, Nazish Qamar · 2023-03-31

# MontiAnna Configuration Model
Issue: https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/generators/EMADL2CPP/-/issues/65 · 2023-03-24 · Evgeny Kusmenko

- [x] Please create a meta-model of a MontiAnna configuration as a class diagram
- [x] Please define the SA model transformation performed on the meta-model, e.g. using pseudo-code
- [x] Please define the model transformation of other hyperparameter search algorithms performed on the configuration meta-model, e.g. using pseudo-code
- [x] Please create object diagrams conforming to the meta-model for several steps of SA and other algorithms, cf. [OD ticket](https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/generators/EMADL2CPP/-/issues/63)

Hiroshi Hamano, AkashKumarDS · 2023-03-10

# Fix ONNX pipeline
Issue: https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/generators/EMADL2CPP/-/issues/59 · 2023-03-09 · Evgeny Kusmenko

Hi Lukas, it seems that one of your merges has broken the [ONNX pipeline](https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/generators/EMADL2CPP/-/pipelines/841093), could you please look into it.

Evgeny Kusmenko, Lukas Bram · 2023-03-10

# Model for Hyperparameter Optimization
Issue: https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/generators/EMADL2CPP/-/issues/60 · 2023-03-04 · Evgeny Kusmenko

- please create an experiment in the mnistcalculator project to show which files are needed, how the project should be organized and where the hyperparam space is defined

Hiroshi Hamano, AkashKumarDS · 2023-02-28

# Object diagrams for model transformations Configuration
Issue: https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/generators/EMADL2CPP/-/issues/63 · 2023-03-04 · Evgeny Kusmenko

Please create object diagrams representing the AST/symbol table of the configuration for several steps of each hyperparameter search algorithm you implement.

Hiroshi Hamano, AkashKumarDS · 2023-02-28

# AutoML: Hyperparameter Search
Issue: https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/generators/EMADL2CPP/-/issues/43 · 2023-03-04 · Evgeny Kusmenko

- Please extend the framework to optimize the MontiAnna hyperparameters for a given learning problem
- extend the framework to automatically exchange pipeline components, e.g. exchange image preprocessing components
- create tests for your framework
- create a model in the MNISTCalculator project X
- create a CI experiment in the MNISTCalculator project

Hiroshi Hamano, AkashKumarDS · 2023-05-01

# Make conf files use package structure
Issue: https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/generators/EMADL2CPP/-/issues/104 · 2023-03-04 · aixai

For now, in Feras' work the conf package is encoded as part of the conf file name. Make it equivalent to components. Use the package keyword.

Tobias Hörnschemeyer, Nazish Qamar, Hiroshi Hamano, AkashKumarDS · 2023-03-03

# MontiAnna Meta-Model
Issue: https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/generators/EMADL2CPP/-/issues/64 · 2023-03-04 · Evgeny Kusmenko

- [x] Please create a meta-model of a MontiAnna neural architecture as a class diagram
- [x] Please define the AdaNet model transformation performed on the meta-model, e.g. using pseudo-code
- [x] Please define the EfficientNet model transformation performed on the neural architecture meta-model, e.g. using pseudo-code
- [x] Please create object diagrams conforming to the meta-model for several steps of AdaNet / EfficientNet, cf. [OD ticket](https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/generators/EMADL2CPP/-/issues/62)
- [x] Make a set of slides available for the thesis (create pictures or EMADL descriptions for the object diagram slides)

Tobias Hörnschemeyer, Nazish Qamar · 2023-02-28

# Make python component library usable as a central resource (stored in EMADL2CPP)
Issue: https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/generators/EMADL2CPP/-/issues/105 · 2023-02-22 · aixai

Currently we need to create Python files for the pipeline components in the projects (cf. mnistdetector -> library). Please make sure we have some predefined components packaged with EMADL2Cpp (or CNNArch2Pytorch or any other meaningful project) which can be used from there without having to manually copy them to the application project.

Tobias Hörnschemeyer, Nazish Qamar, Hiroshi Hamano, AkashKumarDS · 2023-02-18