Commit e35bc3b0 authored by Evgeny Kusmenko's avatar Evgeny Kusmenko

Merge branch 'develop' into 'master'

Develop

See merge request !38
parents c1ba643a 23865315
File mode changed from 100755 to 100644
......@@ -356,7 +356,7 @@ architecture RNNencdec {
```
## Predefined Layers
All methods with the exception of *Concatenate*, *Add*, *Get* and *Split* can only handle 1 input stream and have 1 output stream.
All methods, with the exception of *Concatenate*, *Add*, *Get*, *Split*, *EpisodicMemory*, *DotProductSelfAttention* and potentially *LoadNetwork*, can only handle 1 input stream and have 1 output stream.
All predefined methods start with a capital letter and all constructed methods have to start with a lowercase letter.
* **FullyConnected(units, no_bias=false, flatten=true)**
......@@ -585,6 +585,7 @@ All predefined methods start with a capital letter and all constructed methods h
* **n** (integer > 0, required): How often to copy the entries of the given axis
* **axis** (-1 <= integer <= 2, optional, default=-1): The axis to use for copying. Uses the last axis (-1) by default.
* **Reshape(shape)**
Transforms the input tensor into a different shape, while keeping the number of total entries in the tensor.
......@@ -602,6 +603,61 @@ All predefined methods start with a capital letter and all constructed methods h
* **padding** ({"valid", "same", "no_loss"}, optional, default="same"): One of "valid", "same" or "no_loss". "valid" means no padding. "same" results in padding the input such that the output has the same length as the original input divided by the stride (rounded up). "no_loss" results in minimal padding such that each input is used by at least one filter (identical to "valid" if *stride* equals 1).
* **no_bias** (boolean, optional, default=false): Whether to disable the bias parameter.
* **EpisodicMemory(replayMemoryStoreProb=1, maxStoredSamples=-1, memoryReplacementStrategy="replace_oldest", useReplay=true, replayInterval, replayBatchSize=-1, replaySteps, replayGradientSteps=1, useLocalAdaption=true, localAdaptionGradientSteps=1, localAdaptionK=1, queryNetDir=-1, queryNetPrefix=-1, queryNetNumInputs=1)**
Episodic memory as described in [1], although we implemented it as a layer which can also be used inside a network. Works with multiple inputs; the layer itself is not learned. Inputs are stored and are replayed on the part of the network following this layer. Local adaption retrieves from memory samples related to the sample for which inference should be done and performs fine-tuning with them on the part of the network following this layer. See the usage sketch after the parameter list.
* **replayMemoryStoreProb** (0 <= integer <= 1, optional, default=1): Probability with which a sample seen during training will be stored in memory (per actual sample, not batch).
* **maxStoredSamples** (integer > 0 or -1, optional, default=-1): Maximum number of samples stored in memory. If -1, use unlimited memory (watch out for your RAM or GPU memory).
* **memoryReplacementStrategy** ("replace_oldest" or "no_replacement", optional, default="replace_oldest"): Strategy to use when memory is full (maxStoredSamples is reached). Either "replace_oldest" for replacing the oldest samples, or "no_replacement" for no replacement.
* **useReplay** (boolean, optional, default=true): Whether to use the replay portion of this layer.
* **replayInterval** (integer > 0, required): The interval of batches after which to perform replay.
* **replayBatchSize** (integer > 0 or -1, optional, default=-1): The batch size per replay step (number of samples taken from memory per replay step). If -1, the training batch size is used.
* **replaySteps** (integer > 0, required): How many batches of replay to perform per replay interval.
* **replayGradientSteps** (integer > 0, optional, default=1): How many gradient updates to perform per replay batch.
* **useLocalAdaption** (boolean, optional, default=true): Whether to use the local adaption portion of this layer.
* **localAdaptionGradientSteps** (integer > 0, optional, default=1): How many gradient updates to perform during local adaption.
* **localAdaptionK** (integer > 0, optional, default=1): Number of samples taken from memory for local adaption.
* **queryNetDir** (layerPathParameterTag or path (string), required): The relative path to the directory in which the query network lies.
* **queryNetPrefix** (layerPathParameterTag or path (string), required): Name of the query network to load. This is a prefix of the file names; e.g. for Gluon there will be two files (symbol and params) which start with networkName- (the prefix would be networkName-, including the -), followed by the epoch number (params file) and the file endings.
* **queryNetNumInputs** (integer > 0, required): Number of inputs the query network expects.
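A minimal usage sketch (not taken from the shipped examples): the surrounding layers, the parameter values, the tag name "simple", the prefix "QueryNet-" and the names classes/predictions are assumptions made for illustration only.
```
...
FullyConnected(units=512) ->
EpisodicMemory(replayInterval=100, replaySteps=1,
               queryNetDir="tag:simple", queryNetPrefix="QueryNet-",
               queryNetNumInputs=1) ->
FullyConnected(units=classes) ->
Softmax() ->
predictions
```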
* **LargeMemory(storeDistMeasure="inner_prod", subKeySize, querySize=512, queryAct="linear", k, numHeads=1, valuesDim=-1)**
A learnable key-value memory as described in [2]. Takes one input, which is used to calculate the query. See the usage sketch after the parameter list.
* **storeDistMeasure** ("inner_prod", "l2" or "random", optional, default="inner_prod"): Which distance measure to use between queries and keys.
* **subKeySize** (integer > 0, required): Dimension of the sub-key vectors.
* **querySize** (integer or integer tuple > 0, optional, default=512): If an integer, the dimension of the query vector (one-layer query network); if a tuple, the dimensions of the layers of the query network, with the last entry being the dimension of the query vector.
* **queryAct** (activation type, optional, default="linear"): The activation to use in the query network (the same for all layers).
* **k** (integer > 0, required): How many top-k matches to extract from memory for averaging.
* **numHeads** (integer > 0, optional, default=1): Number of heads to use (parallel computations, aggregated in the end).
* **valuesDim** (integer > 0 or -1, optional, default=-1): The dimension of a value vector. For -1 this will be the query size or its last entry.
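A minimal usage sketch with assumed values (the surrounding layers and all numbers are illustrative); note that subKeySize must be greater than or equal to k, which the new CheckLargeMemoryLayer CoCo enforces.
```
...
FullyConnected(units=512) ->
LargeMemory(subKeySize=256, querySize=512, k=32, numHeads=4) ->
FullyConnected(units=classes) ->
Softmax() ->
predictions
```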
* **DotProductSelfAttention(scaleFactor=-1, numHeads=1, dimKeys=-1, dimValues=-1, useProjBias=true, useMask=false)**
Calculates dot-product self-attention as described in [3]. Takes three inputs: queries, keys and values, plus optionally a mask for masked self-attention as used in BERT. See the usage sketch after the parameter list.
* **scaleFactor** (integer > 0 or -1, optional, default=-1): Factor by which to scale the score of the dot product of queries and keys; for -1 this is sqrt(dimKeys).
* **numHeads** (integer > 0, optional, default=1): How many attention heads to use (parallel attention calculations over the inputs).
* **dimKeys** (integer > 0 or -1, optional, default=-1): The dimension of the keys and queries after the initial linear transformation into a vector. For -1 this is the product of the input dimensions of the queries divided by numHeads.
* **dimValues** (integer > 0 or -1, optional, default=-1): The dimension of the values after the initial linear transformation into a vector. For -1 this is the product of the input dimensions of the queries divided by numHeads.
* **useProjBias** (boolean, optional, default=true): Whether to use a bias in the linear transformations of this layer.
* **useMask** (boolean, optional, default=false): Whether to perform masked self-attention.
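A usage sketch, assuming the three input streams (queries, keys and values) are provided by the branches of a parallel block; the projection layers and all parameter values are illustrative assumptions.
```
...
(
    FullyConnected(units=512, flatten=false)
|
    FullyConnected(units=512, flatten=false)
|
    FullyConnected(units=512, flatten=false)
) ->
DotProductSelfAttention(numHeads=8, dimKeys=64, dimValues=64) ->
...
```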
* **LoadNetwork(networkDir, networkPrefix, numInputs, outputShape)**
Loads a pretrained network as a layer into the current network. The loaded network is then trained further jointly with the rest of the network. Can accept multiple inputs but currently only one output. See the usage sketch after the parameter list.
* **networkDir** (layerPathParameterTag or path (string), required): The relative path to the directory in which the network lies.
* **networkPrefix** (string, required): Name of the network to load. This is a prefix of the file names; e.g. for Gluon there will be two files (symbol and params) which start with networkName- (the prefix would be networkName-, including the -), followed by the epoch number (params file) and the file endings.
* **numInputs** (integer > 0, required): Number of inputs the loaded network expects.
* **outputShape** (integer tuple > 0, required): The expected shape of the output. If the network does not provide this shape, the output will be transformed with a dense layer and a reshape.
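A usage sketch; the tag name "pretrained", the prefix "encoder-", the output shape and the surrounding layers are assumptions made for illustration.
```
...
data ->
LoadNetwork(networkDir="tag:pretrained", networkPrefix="encoder-",
            numInputs=1, outputShape=(1,512)) ->
FullyConnected(units=classes) ->
Softmax() ->
predictions
```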
## Predefined Unroll Types
* **GreedySearch(max_length)**
......@@ -617,3 +673,10 @@ All predefined methods start with a capital letter and all constructed methods h
* **max_length** (integer > 0, required): The maximum number of timesteps to run the RNN, and thus the maximum length of the generated sequence.
* **width** (integer > 0, required): The number of candidates to consider in each timestep. Sometimes called k.
## References
[1] Cyprien de Masson d'Autume, Sebastian Ruder, Lingpeng Kong and Dani Yogatama, "Episodic Memory in Lifelong Language Learning", Proc. NeurIPS, 2019.
[2] Guillaume Lample, Alexandre Sablayrolles, Marc’Aurelio Ranzato, Ludovic Denoyer and Hervé Jégou, "Large Memory Layers with Product Keys", Proc. NeurIPS, 2019.
[3] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin, "Attention Is All You Need", Proc. NeurIPS, 2017.
......@@ -19,7 +19,7 @@
<groupId>de.monticore.lang.monticar</groupId>
<artifactId>cnn-arch</artifactId>
<version>0.3.6-SNAPSHOT</version>
<version>0.3.7-SNAPSHOT</version>
......
......@@ -60,6 +60,8 @@ public class CNNArchCocos {
.addCoCo(new CheckLayerVariableDeclarationLayerType())
.addCoCo(new CheckLayerVariableDeclarationIsUsed())
.addCoCo(new CheckConstants())
.addCoCo(new CheckLargeMemoryLayer())
.addCoCo(new CheckEpisodicMemoryLayer())
.addCoCo(new CheckUnrollInputsOutputsTooMany());
}
......
/**
*
* (c) https://github.com/MontiCore/monticore
*
* The license generally applicable for this project
* can be found under https://github.com/MontiCore/monticore.
*/
/* (c) https://github.com/MontiCore/monticore */
package de.monticore.lang.monticar.cnnarch._cocos;
import de.monticore.lang.monticar.cnnarch._symboltable.StreamInstructionSymbol;
import de.monticore.lang.monticar.cnnarch._symboltable.ArchitectureElementSymbol;
import de.monticore.lang.monticar.cnnarch._symboltable.ParallelCompositeElementSymbol;
import de.monticore.lang.monticar.cnnarch._symboltable.SerialCompositeElementSymbol;
import de.monticore.lang.monticar.cnnarch._symboltable.LayerSymbol;
import de.monticore.lang.monticar.cnnarch._symboltable.ArgumentSymbol;
import de.monticore.lang.monticar.cnnarch.helper.ErrorCodes;
import de.se_rwth.commons.logging.Log;
import java.util.Optional;
import java.util.List;
import java.io.File;
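/**
 * CoCo that checks that no EpisodicMemory layer is placed inside a parallel
 * execution block (nested parallel/serial composites are checked recursively).
 */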
public class CheckEpisodicMemoryLayer extends CNNArchSymbolCoCo {
@Override
public void check(StreamInstructionSymbol stream) {
List<ArchitectureElementSymbol> elements = stream.getBody().getElements();
for (ArchitectureElementSymbol element : elements) {
if (element instanceof ParallelCompositeElementSymbol) {
checkForEpisodicMemory((ParallelCompositeElementSymbol) element);
}
}
}
protected void checkForEpisodicMemory(ParallelCompositeElementSymbol parallelElement) {
for (ArchitectureElementSymbol subStream : parallelElement.getElements()) {
if (subStream instanceof SerialCompositeElementSymbol) { //should always be the case
for (ArchitectureElementSymbol element : ((SerialCompositeElementSymbol) subStream).getElements()) {
if (element instanceof ParallelCompositeElementSymbol) {
checkForEpisodicMemory((ParallelCompositeElementSymbol) element);
} else if (element.getName().equals("EpisodicMemory")) {
Log.error("0" + ErrorCodes.INVALID_EPISODIC_MEMORY_LAYER_PLACEMENT +
" Invalid placement of EpisodicMemory layer. It can't be placed inside a Prallalel execution block.",
element.getSourcePosition());
}
}
}
}
}
}
/**
*
* (c) https://github.com/MontiCore/monticore
*
* The license generally applicable for this project
* can be found under https://github.com/MontiCore/monticore.
*/
package de.monticore.lang.monticar.cnnarch._cocos;
import de.monticore.lang.monticar.cnnarch._symboltable.ArchitectureElementSymbol;
import de.monticore.lang.monticar.cnnarch._symboltable.LayerSymbol;
import de.monticore.lang.monticar.cnnarch._symboltable.ArgumentSymbol;
import de.monticore.lang.monticar.cnnarch.helper.ErrorCodes;
import de.se_rwth.commons.logging.Log;
import java.util.Optional;
import java.util.List;
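/**
 * CoCo that validates the parameters of a LargeMemory layer: subKeySize must be
 * greater than or equal to k.
 */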
public class CheckLargeMemoryLayer extends CNNArchSymbolCoCo {
@Override
public void check(ArchitectureElementSymbol sym) {
if (sym instanceof LayerSymbol && sym.getName().equals("LargeMemory")) {
checkLargeMemoryLayer((LayerSymbol) sym);
}
}
protected void checkLargeMemoryLayer(LayerSymbol layer) {
List<ArgumentSymbol> arguments = layer.getArguments();
Integer subKeySize = 0;
Integer k = 0;
for (ArgumentSymbol arg : arguments) {
if (arg.getName().equals("subKeySize")) {
subKeySize = arg.getRhs().getIntValue().get();
} else if (arg.getName().equals("k")) {
k = arg.getRhs().getIntValue().get();
}
}
if (subKeySize < k) {
Log.error("0" + ErrorCodes.INVALID_LARGE_MEMORY_LAYER_PARAMETERS +
" Invalid LargeMemory layer Parameter values, subKeySize has to be greater or equal to k. ",
layer.getSourcePosition());
}
}
}
/**
*
* (c) https://github.com/MontiCore/monticore
*
* The license generally applicable for this project
* can be found under https://github.com/MontiCore/monticore.
*/
/* (c) https://github.com/MontiCore/monticore */
package de.monticore.lang.monticar.cnnarch._cocos;
import de.monticore.lang.monticar.cnnarch.helper.ErrorCodes;
import de.monticore.lang.monticar.cnnarch._symboltable.LayerSymbol;
import de.monticore.lang.monticar.cnnarch._symboltable.ArgumentSymbol;
import de.se_rwth.commons.logging.Log;
import java.io.File;
import java.util.*;
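/**
 * Validates path-valued layer parameters: a non-empty tag has to be registered in
 * layerPathParameterTags, and the given path has to exist on the file system.
 */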
public class CheckLayerPathParameter {
public static void check(LayerSymbol sym, String path, String tag, HashMap layerPathParameterTags) {
checkTag(sym, tag, layerPathParameterTags);
checkPath(sym, path);
}
protected static void checkTag(LayerSymbol layer, String tag, HashMap layerPathParameterTags){
if (!tag.equals("") && !layerPathParameterTags.containsKey(tag)) {
Log.error("0" + ErrorCodes.INVALID_LAYER_PATH_PARAMETER_TAG +
"The LayerPathParameter tag " + tag + " was not found.",
layer.getSourcePosition());
}
}
protected static void checkPath(LayerSymbol layer, String path) {
File dir = new File(path);
if (dir.exists()) {
return;
}
Log.error("0" + ErrorCodes.INVALID_LAYER_PATH_PARAMETER_PATH +
" For the concatination of queryNetDir and queryNetPrefix exists no file which path has this as prefix.",
layer.getSourcePosition());
}
}
......@@ -74,7 +74,18 @@ abstract public class ArchExpressionSymbol extends CommonSymbol {
public boolean isString(){
if (getValue().isPresent()){
return getStringValue().isPresent();
Optional<String> stringValue = getStringValue();
if (stringValue.isPresent() && !stringValue.get().startsWith("tag:")) {
return getStringValue().isPresent();
}
}
return false;
}
public boolean isStringTag(){
if (getValue().isPresent()) {
Optional<String> stringValue = getStringValue();
return stringValue.isPresent() && stringValue.get().startsWith("tag:");
}
return false;
}
......
......@@ -30,7 +30,6 @@ public class ArchTypeSymbol extends CommonSymbol {
private int widthIndex = -1;
private List<ArchSimpleExpressionSymbol> dimensions = new ArrayList<>();
public ArchTypeSymbol() {
super("", KIND);
ASTElementType elementType = new ASTElementType();
......@@ -146,7 +145,7 @@ public class ArchTypeSymbol extends CommonSymbol {
}
return dimensionList;
}
public Set<ParameterSymbol> resolve() {
if (!isResolved()){
if (isResolvable()){
......
......@@ -157,6 +157,7 @@ public abstract class ArchitectureElementSymbol extends ResolvableSymbol {
else {
return Optional.empty();
}
}
/**
......
......@@ -14,11 +14,14 @@ package de.monticore.lang.monticar.cnnarch._symboltable;
import de.monticore.lang.monticar.cnnarch.helper.Utils;
import de.monticore.lang.monticar.cnnarch.predefined.AllPredefinedLayers;
import de.monticore.lang.monticar.cnnarch.predefined.AllPredefinedVariables;
import de.monticore.lang.monticar.cnnarch._cocos.CheckLayerPathParameter;
import de.monticore.symboltable.CommonScopeSpanningSymbol;
import de.monticore.symboltable.Scope;
import de.monticore.symboltable.Symbol;
import org.apache.commons.math3.ml.neuralnet.Network;
import java.lang.RuntimeException;
import java.lang.NullPointerException;
import java.util.*;
public class ArchitectureSymbol extends CommonScopeSpanningSymbol {
......@@ -214,5 +217,92 @@ public class ArchitectureSymbol extends CommonScopeSpanningSymbol {
return copy;
}
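// Replaces "tag:" prefixed path parameters of all layers by the paths registered
// in layerPathParameterTags and validates tag and path via CheckLayerPathParameter.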
public void processLayerPathParameterTags(HashMap layerPathParameterTags){
for(NetworkInstructionSymbol networkInstruction : networkInstructions){
List<ArchitectureElementSymbol> elements = networkInstruction.getBody().getElements();
processElementsLayerPathParameterTags(elements, layerPathParameterTags);
}
}
public void processElementsLayerPathParameterTags(List<ArchitectureElementSymbol> elements, HashMap layerPathParameterTags){
for (ArchitectureElementSymbol element : elements){
if (element instanceof SerialCompositeElementSymbol || element instanceof ParallelCompositeElementSymbol){
processElementsLayerPathParameterTags(((CompositeElementSymbol) element).getElements(), layerPathParameterTags);
}else if (element instanceof LayerSymbol){
for (ArgumentSymbol param : ((LayerSymbol) element).getArguments()){
boolean isPathParam = false;
if (param.getParameter() != null) {
for (Constraints constr : param.getParameter().getConstraints()) {
if (constr.name().equals("PATH_TAG_OR_PATH")) {
isPathParam = true;
}
}
}
if (isPathParam){
String paramValue = param.getRhs().getStringValue().get();
if (paramValue.startsWith("tag:")) {
String pathTag = param.getRhs().getStringValue().get().split(":")[1];
String path = (String) layerPathParameterTags.get(pathTag);
param.setRhs(ArchSimpleExpressionSymbol.of(path));
CheckLayerPathParameter.check((LayerSymbol) element, path, pathTag, layerPathParameterTags);
}else{
CheckLayerPathParameter.check((LayerSymbol) element, paramValue, "", layerPathParameterTags);
}
}
}
}
}
}
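// Splits each network into sub-networks at every EpisodicMemory layer for which
// replay or local adaption is enabled (explicitly or by default) and stores the
// resulting partition in the network body.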
public void processForEpisodicReplayMemory(){
for(NetworkInstructionSymbol networkInstruction : networkInstructions){
List<ArchitectureElementSymbol> elements = networkInstruction.getBody().getElements();
List<ArchitectureElementSymbol> elementsNew = new ArrayList<>();
List<List<ArchitectureElementSymbol>> episodicSubNetworks = new ArrayList<>(new ArrayList<>());
List<ArchitectureElementSymbol> currentEpisodicSubNetworkElements = new ArrayList<>();
for (ArchitectureElementSymbol element : elements){
if (AllPredefinedLayers.EPISODIC_REPLAY_LAYER_NAMES.contains(element.getName())) {
boolean use_replay = false;
boolean use_local_adaption = false;
for (ArgumentSymbol arg : ((LayerSymbol)element).getArguments()){
if (arg.getName().equals(AllPredefinedLayers.USE_REPLAY_NAME) && (boolean)arg.getRhs().getValue().get()){
use_replay = true;
break;
}else if (arg.getName().equals(AllPredefinedLayers.USE_LOCAL_ADAPTION_NAME) && (boolean)arg.getRhs().getValue().get()){
use_local_adaption = true;
break;
}
}
if (!use_replay && !use_local_adaption) {
for (ParameterSymbol param : ((LayerSymbol) element).getDeclaration().getParameters()) {
if (param.getName().equals(AllPredefinedLayers.USE_REPLAY_NAME) &&
(boolean) param.getDefaultExpression().get().getValue().get()) {
use_replay = true;
break;
} else if (param.getName().equals(AllPredefinedLayers.USE_LOCAL_ADAPTION_NAME) &&
(boolean) param.getDefaultExpression().get().getValue().get()) {
use_local_adaption = true;
break;
}
}
}
if (use_replay || use_local_adaption){
if (!currentEpisodicSubNetworkElements.isEmpty()){
episodicSubNetworks.add(currentEpisodicSubNetworkElements);
}
currentEpisodicSubNetworkElements = new ArrayList<>();
}
}
currentEpisodicSubNetworkElements.add(element);
}
if (!currentEpisodicSubNetworkElements.isEmpty() && !episodicSubNetworks.isEmpty()){
episodicSubNetworks.add(currentEpisodicSubNetworkElements);
}
networkInstruction.getBody().setEpisodicSubNetworks(episodicSubNetworks);
}
}
}
......@@ -30,9 +30,8 @@ public class ArgumentSymbol extends CommonSymbol {
}
public ParameterSymbol getParameter() {
if (parameter == null){
if (parameter == null && getEnclosingScope() != null){
Symbol spanningSymbol = getEnclosingScope().getSpanningSymbol().get();
if (spanningSymbol instanceof UnrollInstructionSymbol) {
UnrollInstructionSymbol unroll = (UnrollInstructionSymbol) getEnclosingScope().getSpanningSymbol().get();
......
......@@ -17,17 +17,18 @@ public abstract class CompositeElementSymbol extends ArchitectureElementSymbol {
protected List<ArchitectureElementSymbol> elements = new ArrayList<>();
public CompositeElementSymbol() {
super("");
setResolvedThis(this);
}
abstract protected void setElements(List<ArchitectureElementSymbol> elements);
public List<ArchitectureElementSymbol> getElements() {
return elements;
}
abstract protected void setElements(List<ArchitectureElementSymbol> elements);
@Override
public boolean isAtomic() {
return getElements().isEmpty();
......
......@@ -48,6 +48,26 @@ public enum Constraints {
return "a boolean";
}
},
STRING {
@Override
public boolean isValid(ArchSimpleExpressionSymbol exp) {
return exp.isString();
}
@Override
public String msgString() {
return "a string";
}
},
PATH_TAG_OR_PATH {
@Override
public boolean isValid(ArchSimpleExpressionSymbol exp) {
return exp.isString() || exp.isStringTag();
}
@Override
public String msgString() {
return "a path tag or a path string";
}
},
TUPLE {
@Override
public boolean isValid(ArchSimpleExpressionSymbol exp) {
......@@ -68,6 +88,16 @@ public enum Constraints {
return "a tuple of integers";
}
},
INTEGER_OR_INTEGER_TUPLE {
@Override
public boolean isValid(ArchSimpleExpressionSymbol exp) {
return exp.isInt().get() || exp.isIntTuple().get();
}
@Override
public String msgString() {
return "an integer or tuple of integers";
}
},
POSITIVE {
@Override
public boolean isValid(ArchSimpleExpressionSymbol exp) {
......@@ -90,6 +120,28 @@ public enum Constraints {
return "a positive number";
}
},
POSITIVE_OR_MINUS_ONE {
@Override
public boolean isValid(ArchSimpleExpressionSymbol exp) {
if (exp.getDoubleValue().isPresent()){
return exp.getDoubleValue().get() > 0 || exp.getDoubleValue().get() == -1;
}
else if (exp.getDoubleTupleValues().isPresent()){
boolean isPositive = true;
for (double value : exp.getDoubleTupleValues().get()){
if (value <= 0 && value != -1){
isPositive = false;
}
}
return isPositive;
}
return false;
}
@Override
public String msgString() {
return "a positive number";
}
},
NON_NEGATIVE {
@Override
public boolean isValid(ArchSimpleExpressionSymbol exp) {
......@@ -191,6 +243,71 @@ public enum Constraints {
+ AllPredefinedLayers.POOL_AVG;
}
},
ACTIVATION_TYPE {
@Override
public boolean isValid(ArchSimpleExpressionSymbol exp) {
Optional<String> optString= exp.getStringValue();
if (optString.isPresent()){
if (optString.get().equals(AllPredefinedLayers.MEMORY_ACTIVATION_LINEAR)
|| optString.get().equals(AllPredefinedLayers.MEMORY_ACTIVATION_RELU)
|| optString.get().equals(AllPredefinedLayers.MEMORY_ACTIVATION_TANH)
|| optString.get().equals(AllPredefinedLayers.MEMORY_ACTIVATION_SIGMOID)
|| optString.get().equals(AllPredefinedLayers.MEMORY_ACTIVATION_SOFTRELU)
|| optString.get().equals(AllPredefinedLayers.MEMORY_ACTIVATION_SOFTSIGN)){
return true;
}
}
return false;
}
@Override
protected String msgString() {
return AllPredefinedLayers.MEMORY_ACTIVATION_LINEAR + " or "
+ AllPredefinedLayers.MEMORY_ACTIVATION_RELU + " or "
+ AllPredefinedLayers.MEMORY_ACTIVATION_TANH + " or "
+ AllPredefinedLayers.MEMORY_ACTIVATION_SIGMOID + " or "
+ AllPredefinedLayers.MEMORY_ACTIVATION_SOFTRELU + " or "
+ AllPredefinedLayers.MEMORY_ACTIVATION_SOFTSIGN;
}
},
DIST_MEASURE_TYPE {
@Override
public boolean isValid(ArchSimpleExpressionSymbol exp) {
Optional<String> optString= exp.getStringValue();
if (optString.isPresent()){
if (optString.get().equals(AllPredefinedLayers.L2)
|| optString.get().equals(AllPredefinedLayers.INNER_PROD)){
return true;
}
}
return false;
}
@Override