Unverified Commit 228b6e55 authored by Thomas Michael Timmermanns's avatar Thomas Michael Timmermanns Committed by GitHub

Merge pull request #14 from EmbeddedMontiArc/timmermanns

Timmermanns
parents 12c7fe16 6b579a61
# CNNArch
[![Maintainability](https://api.codeclimate.com/v1/badges/fc45309cb83a31c9586e/maintainability)](https://codeclimate.com/github/EmbeddedMontiArc/CNNArchLang/maintainability)
[![Build Status](https://travis-ci.org/EmbeddedMontiArc/CNNArchLang.svg?branch=master)](https://travis-ci.org/EmbeddedMontiArc/CNNArchLang)
[![Build Status](https://circleci.com/gh/EmbeddedMontiArc/CNNArchLang/tree/master.svg?style=shield&circle-token=:circle-token)](https://circleci.com/gh/EmbeddedMontiArc/CNNArchLang/tree/master)
[![Coverage Status](https://coveralls.io/repos/github/EmbeddedMontiArc/CNNArchLang/badge.svg?branch=master)](https://coveralls.io/github/EmbeddedMontiArc/CNNArchLang?branch=master)
**work in progress**
## Introduction
CNNArch is a declarative language to build architectures of feedforward neural networks with a special focus on convolutional neural networks. It is being developed for use in the MontiCar language family, along with CNNTrain, which configures the training of the network, and EmbeddedMontiArcDL, which combines the languages into an EmbeddedMontiArc component.
The inputs and outputs of a network are strongly typed and the validity of a network is checked at compile time.
In the following, we will explain the syntax and all features of CNNArch in combination with code examples to show how these can be used.
## Basic Structure
The syntax of this language has many similarities to Python in the way variables and methods are handled. There are three types of variables: constants, parameters and IO-variables. All of them are seemingly untyped; however, the correctness of their values is checked at compile time. Constants can be defined by the user in the declaration part of the architecture (top part). IO-variables are variables which define the shape of inputs and outputs. They are initialized in the architecture but can be changed from the outside to fit the actual inputs and outputs of a component. If this is not wanted, the shape of inputs and outputs can also be defined with fixed values. The main part is the actual definition of the architecture in the form of a collection of layers which are connected through the two operators "->" and "|". A layer can either be a method, an input or an output. The following is a complete example of the original version of Alexnet by A. Krizhevsky. There are more compact versions of the same architecture, but we will get to that later. All predefined methods are listed at the end of this document.
```
architecture Alexnet_simple{
    def input Z(0:255)^{h=224,w=224,c=3} image
    def output Q(0:1)^{classes=10} predictions

    image ->
    Convolution(kernel=(11,11), channels=96, stride=(4,4), padding="no_loss") ->
    Lrn(nsize=5, alpha=0.0001, beta=0.75) ->
    MaxPooling(kernel=(3,3), stride=(2,2), padding="no_loss") ->
    Relu() ->
    Split(n=2) ->
    (
        [0] ->
        Convolution(kernel=(5,5), channels=128) ->
        Lrn(nsize=5, alpha=0.0001, beta=0.75) ->
        MaxPooling(kernel=(3,3), stride=(2,2), padding="no_loss") ->
        Relu()
    |
        [1] ->
        Convolution(kernel=(5,5), channels=128) ->
        Lrn(nsize=5, alpha=0.0001, beta=0.75) ->
        MaxPooling(kernel=(3,3), stride=(2,2), padding="no_loss") ->
        Relu()
    ) ->
    Concatenate() ->
    Convolution(kernel=(3,3), channels=384) ->
    Relu() ->
    Split(n=2) ->
    (
        [0] ->
        Convolution(kernel=(3,3), channels=192) ->
        Relu() ->
        Convolution(kernel=(3,3), channels=128) ->
        MaxPooling(kernel=(3,3), stride=(2,2), padding="no_loss") ->
        Relu()
    |
        [1] ->
        Convolution(kernel=(3,3), channels=192) ->
        Relu() ->
        Convolution(kernel=(3,3), channels=128) ->
        MaxPooling(kernel=(3,3), stride=(2,2), padding="no_loss") ->
        Relu()
    ) ->
    Concatenate() ->
    FullyConnected(units=4096) ->
    Relu() ->
    Dropout() ->
    FullyConnected(units=4096) ->
    Relu() ->
    Dropout() ->
    FullyConnected(units=classes) ->
    Softmax() ->
    predictions
}
```
*Note: The third convolutional and the first two fully connected layers are not divided into two streams like they are in the original Alexnet. This is done for the sake of simplicity. However, this change should not affect the actual computation.*
## Layer Operators
The architecture does not use symbols to denote connections between layers like most deep learning frameworks but instead uses an approach which describes the data flow through the network. The first operator is the serial connection "->". This operator simply connects the output of the first layer to the input of the second layer. Despite being sequential in nature, CNNArch is still able to describe complex networks like ResNeXt through the use of the parallelization operator "|". This operator splits the network into parallel data streams. Each stream in a parallel block has the same input as the whole block. The output of a parallel block is a list of streams which can be merged into a single stream through the use of the following methods: `Concatenate()`, `Add()` or `Get(index)`. Note: `Get(index=i)` can be abbreviated by `[i]`. The method `Split(n)` in the example above creates multiple output streams from a single input stream by splitting the data itself into *n* streams, which can then be handled separately.
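For illustration, the fragment below (a minimal sketch with placeholder layer parameters, not taken from the example above) runs two parallel streams over the same input and merges them element-wise with `Add()`:
```
(
    Convolution(kernel=(3,3), channels=64) ->
    Relu()
|
    Convolution(kernel=(1,1), channels=64)
) ->
Add() ->
Relu() ->
```
Replacing `Add()` with `Concatenate()` would instead stack the channels of both streams, and `[0]` (i.e. `Get(index=0)`) would simply discard the second stream.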
## Inputs and Outputs
An architecture in CNNArch can have multiple inputs and outputs. Multiple inputs (or outputs) of the same form can be initialized as arrays. The declaration can look like the following:
```
def input Z(0:255)^{h=200,w=300,c=3} image[2]
def input Q(-oo:+oo)^{10} additionalData
def output Q(0:1)^{classes=3} predictions
```
The first line defines the input *image* as an array of two RGB (or BGR) images with a resolution of 300 x 200. The part `Z(0:255)`, which corresponds to the type definition in EmbeddedMontiArc, restricts the values to integers between 0 and 255. The following part `{h=200,w=300,c=3}` declares the shape of the input. The shape denotes the dimensionality in the form of height, width and depth (number of channels). Here, the height is initialized as 200, the width as 300 and the number of channels as 3. Height, width and depth can each be given either as an initialized variable or as a fixed value. The second line defines another input as a vector of 10 elements with arbitrary rational values. The last line defines an output as the probability of three classes.
If an input or output is an array, it can be used in the architecture in two different ways: either a single element is accessed or the array is used as a whole. The line `image[0] ->` would access the first image of the array and `image ->` would directly result in 2 output streams. In fact, the line `image ->` is identical to `(image[0] | image[1]) ->`. Furthermore, assuming *out* is an output array of size 2, the line `-> out` would be identical to `-> ([0]->out[0] | [1]->out[1])`. Inputs and outputs can also be used in the middle of an architecture. In general, inputs create new streams and outputs consume existing streams.
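As a hedged sketch (the shapes and names are hypothetical, not taken from the declarations above), an architecture could consume both elements of an input array at once and merge the resulting streams by channel concatenation:
```
architecture ArrayInputExample{
    def input Z(0:255)^{h=32,w=32,c=3} image[2]
    def output Q(0:1)^{classes=10} predictions

    image ->
    Concatenate() ->
    FullyConnected(units=classes) ->
    Softmax() ->
    predictions
}
```
Here `image ->` is equivalent to `(image[0] | image[1]) ->`, so `Concatenate()` receives two streams and merges them into one before the fully connected layer.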
## Constants
The following example uses the two constants `a` and `b`:
```
architecture ExampleNetwork{
    def input Q(-oo:+oo)^{inputs=10} in
    def output Q(0:1)^{classes=2} out

    a = inputs * 3 + classes
    b = 64

    in ->
    FullyConnected(units=a) ->
    Tanh() ->
    FullyConnected(units=b) ->
    Tanh() ->
    FullyConnected(units=classes) ->
    Softmax() ->
    out
}
```
## Methods
It is possible to avoid redundancy in the architecture through the declaration of new methods. The method declaration is similar to Python: each parameter can have a default value that makes it an optional argument. The method call is also similar to Python but, in contrast to Python, it is necessary to specify the name of each argument. The body of a new method is constructed from other layers, including other user-defined methods. However, recursion is not allowed; the compiler will throw an error if recursion occurs. The following is an example of multiple method declarations.
```
def conv(filter, channels, stride=1, act=true){
    Convolution(kernel=(filter,filter), channels=channels, stride=(stride,stride)) ->
    BatchNorm() ->
    Relu(If=act)
}
def skip(channels, stride){
    Convolution(kernel=(1,1), channels=channels, stride=(stride,stride)) ->
    BatchNorm()
}
def resLayer(channels, stride=1){
    (
        conv(filter=3, channels=channels, stride=stride) ->
        conv(filter=3, channels=channels, act=false)
    |
        skip(channels=channels, stride=stride, If=(stride!=1))
    ) ->
    Add() ->
    Relu()
}
```
The method `resLayer` in this example corresponds to a building block of a Residual Network. The `If` argument is a special argument which is explained in the next section.
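As a sketch of how such user-defined methods are used (the surrounding architecture and parameter values are hypothetical), `conv` and `resLayer` can be called like any predefined layer, with every argument named:
```
image ->
conv(filter=7, channels=64, stride=2) ->
resLayer(channels=64) ->
resLayer(channels=128, stride=2) ->
FullyConnected(units=classes) ->
Softmax() ->
predictions
```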
## Special Arguments
There exist special structural arguments which can be used in each method: `->`, `|` and `If`. `->` and `|` can only be set to positive integers and `If` can only be set to a boolean. The argument `If` does nothing if it is set to true and removes the layer completely if it is set to false. The other two arguments create a repetition of the method. We will show their effect with examples. Assuming `a` is a method without required arguments, then `a(-> = 3)->` is equal to `a()->a()->a()->`, `a(| = 3)->` is equal to `(a() | a() | a())->` and `a(-> = 3, | = 2)->` is equal to `(a()->a()->a() | a()->a()->a())->`.
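Combined with user-defined methods, these arguments keep deep architectures compact. As a hedged sketch based on the `resLayer` method from the previous section, the line
```
resLayer(channels=128, ->=3) ->
```
is equivalent to
```
resLayer(channels=128) ->
resLayer(channels=128) ->
resLayer(channels=128) ->
```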
## Argument Sequences
It is also possible to create a repetition of a method in another way through the use of argument sequences. The following are valid sequences: `[2->5->3]`, `[true|false|false]`, `[2->1|4->4->6]`, `[ |2->3]`, `1->..->5` and `3|..|-2`. All values in these examples could also be replaced by variable names or expressions. The first three are standard sequences and the last two are intervals. An interval can be translated to a standard sequence. The interval `3|..|-2` is equal to `[3|2|1|0|-1|-2]` and `1->..->5` is equal to `[1->2->3->4->5]`.
If an argument is set to a sequence, the method will be repeated for each value in the sequence and the connection between the layers will be the same as between the values of the sequence. An argument which has a single value is neutral to the repetition, which means that it will be repeated an arbitrary number of times without interfering with the repetition. If a method contains multiple argument sequences, CNNArch will try to combine the sequences. The language will throw an error at compile time if this fails. Assuming the method `m(a, b, c)` exists, the line `m(a=[5->3], b=[3|4|2], c=2)->` is equal to:
```
(
    m(a=5, b=3, c=2) ->
    m(a=3, b=3, c=2)
|
    m(a=5, b=4, c=2) ->
    m(a=3, b=4, c=2)
|
    m(a=5, b=2, c=2) ->
    m(a=3, b=2, c=2)
) ->
However, the line `m(a=[5->3], b=[2|4->6], c=2)->` would throw an error because it is not possible to expand *a* such that it is the same size as *b*.
## Expressions
Currently, the working expression operators are the basic arithmetic operators "+", "-", "\*", "/", the logical operators "&&", "||", and, for most cases, the comparison operators "==", "!=", "<", ">", "<=", ">=". The comparison operators do not work reliably for the comparison of tuples (they only compare the last element of the tuples).
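As a small hedged sketch (the names are hypothetical, assuming an input dimension `inputs` and an output dimension `classes` as in the Constants example), such expressions can be used directly in constants and arguments:
```
a = inputs / 2 + 4

in ->
FullyConnected(units=a * 3) ->
Relu() ->
Dropout(If=(a > 8)) ->
FullyConnected(units=classes) ->
Softmax() ->
out
```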
## Another Example
This version of Alexnet, which uses user-defined methods, argument sequences and special arguments, is identical to the one in the section Basic Structure.
```
architecture Alexnet{
    def input Z(0:255)^{h=224,w=224,c=3} image
    def output Q(0:1)^{classes=10} predictions

    def conv(filter, channels, convStride=1, poolStride=1, hasLrn=false, convPadding="same"){
        Convolution(kernel=(filter,filter), channels=channels, stride=(convStride,convStride), padding=convPadding) ->
        Lrn(nsize=5, alpha=0.0001, beta=0.75, If=hasLrn) ->
        MaxPooling(kernel=(3,3), stride=(poolStride,poolStride), padding="no_loss", If=(poolStride != 1)) ->
        Relu()
    }
    def split1(i){
        [i] ->
        conv(filter=5, channels=128, poolStride=2, hasLrn=true)
    }
    def split2(i){
        [i] ->
        conv(filter=3, channels=192) ->
        conv(filter=3, channels=128, poolStride=2)
    }
    def fc(){
        FullyConnected(units=4096) ->
        Relu() ->
        Dropout()
    }

    image ->
    conv(filter=11, channels=96, convStride=4, poolStride=2, hasLrn=true, convPadding="no_loss") ->
    Split(n=2) ->
    split1(i=[0|1]) ->
    Concatenate() ->
    conv(filter=3, channels=384) ->
    Split(n=2) ->
    split2(i=[0|1]) ->
    Concatenate() ->
    fc(->=2) ->
    FullyConnected(units=classes) ->
    Softmax() ->
    predictions
}
```
## Predefined Layers
All methods with the exception of *Concatenate*, *Add*, *Get* and *Split* can only handle 1 input stream and have 1 output stream. All predefined methods start with a capital letter and all constructed methods have to start with a lowercase letter.
* **FullyConnected(units, no_bias=false)**

  Creates a fully connected layer and applies flatten to the input if necessary.

  * **units** (integer > 0, required): number of neural units in the output.
  * **no_bias** (boolean, optional, default=false): Whether to disable the bias parameter.

* **Convolution(kernel, channels, stride=(1,1), padding="same", no_bias=false)**

  Creates a convolutional layer. Currently, only 2D convolutions are allowed.

  * **kernel** (integer tuple > 0, required): convolution kernel size: (height, width).
  * **channels** (integer > 0, required): number of convolution filters and number of output channels.
  * **stride** (integer tuple > 0, optional, default=(1,1)): convolution stride: (height, width).
  * **padding** (String, optional, default="same"): One of "valid", "same" or "no_loss". "valid" means no padding. "same" results in padding the input such that the output has the same length as the original input divided by the stride (rounded up). "no_loss" results in minimal padding such that each input is used by at least one filter (identical to "valid" if *stride* equals 1).
  * **no_bias** (boolean, optional, default=false): Whether to disable the bias parameter.

* **Softmax()**

  Applies the softmax activation function to the input.

* **Tanh()**

  Applies the tanh activation function to the input.

* **Sigmoid()**

  Applies the sigmoid activation function to the input.

* **Relu()**

  Applies the relu activation function to the input.

* **Flatten()**

  Reshapes the input such that height and width are 1. Usually not necessary because the FullyConnected layer applies *Flatten* automatically.

* **Dropout(p=0.5)**

  Applies a dropout operation to the input during training.

  * **p** (1 >= float >= 0, optional, default=0.5): Fraction of the input that gets dropped out during training time.

* **MaxPooling(kernel, stride=(1,1), padding="same", global=false)**

  Performs max pooling on the input.

  * **kernel** (integer tuple > 0, required): pooling kernel size: (height, width). Not required if *global* is true.
  * **stride** (integer tuple > 0, optional, default=(1,1)): pooling stride: (height, width).
  * **padding** (String, optional, default="same"): One of "valid", "same" or "no_loss". "valid" means no padding. "same" results in padding the input such that the output has the same length as the original input divided by the stride (rounded up). "no_loss" results in minimal padding such that each input is used by at least one filter (identical to "valid" if *stride* equals 1).
  * **global** (boolean, optional, default=false): Ignores kernel, stride and padding and pools globally over the current input feature map.

* **AveragePooling(kernel, stride=(1,1), padding="same", global=false)**

  Performs average pooling on the input.

  * **kernel** (integer tuple > 0, required): pooling kernel size: (height, width). Not required if *global* is true.
  * **stride** (integer tuple > 0, optional, default=(1,1)): pooling stride: (height, width).
  * **padding** (String, optional, default="same"): One of "valid", "same" or "no_loss". "valid" means no padding. "same" results in padding the input such that the output has the same length as the original input divided by the stride (rounded up). "no_loss" results in minimal padding such that each input is used by at least one filter (identical to "valid" if *stride* equals 1).
  * **global** (boolean, optional, default=false): Ignores kernel, stride and padding and pools globally over the current input feature map.

* **Lrn(nsize, knorm=2, alpha=0.0001, beta=0.75)**

  Applies local response normalization to the input.
  See: [mxnet](https://mxnet.incubator.apache.org/api/python/symbol.html#mxnet.symbol.LRN)

  * **nsize** (integer > 0, required): normalization window width in elements.
  * **knorm** (float, optional, default=2): The parameter k in the LRN expression.
  * **alpha** (float, optional, default=0.0001): The variance scaling parameter *alpha* in the LRN expression.
  * **beta** (float, optional, default=0.75): The power parameter *beta* in the LRN expression.

* **BatchNorm(fix_gamma=true)**

  Applies batch normalization to the input.

  * **fix_gamma** (boolean, optional, default=true): Fix gamma while training.

* **Concatenate()**

  Merges multiple input streams into one output stream by concatenating their channels. The height and width of all inputs must be identical. The number of channels of the output is the sum of the numbers of channels of the input streams.

* **Add()**

  Merges multiple input streams into one output stream by adding them element-wise. The height, width and number of channels of all inputs must be identical. The output shape is identical to each input shape.

* **Get(index)**

  `Get(index=i)` can be abbreviated with `[i]`. Selects one out of multiple input streams. The single output stream is identical to the selected input.

  * **index** (integer >= 0, required): The zero-based index of the selected input.

* **Split(n)**

  Opposite of *Concatenate*. Takes a single input stream and splits it into *n* output streams (see the sketch after this list). The output streams have the same height and width as the input stream and, in general, `input_channels / n` channels each. The last output stream will have more channels than the others if `input_channels` is not divisible by `n`.

  * **n** (integer > 0, required): The number of output streams. Cannot be higher than the number of input channels.
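To make the stream handling concrete, here is a hedged sketch (channel counts chosen purely for illustration): assume the incoming stream has 256 channels. `Split(n=2)` produces two streams with 128 channels each, which are processed separately and then merged again:
```
Split(n=2) ->
(
    [0] ->
    Convolution(kernel=(3,3), channels=128)
|
    [1] ->
    Convolution(kernel=(3,3), channels=128)
) ->
Concatenate() ->
```
`Concatenate()` yields 256 output channels (the sum over both streams); replacing it with `Add()` would instead keep 128 channels and requires identical shapes in both streams.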
<br><br>
\ No newline at end of file
@@ -20,7 +20,7 @@
*/
package de.monticore.lang.monticar.cnnarch._ast;
import de.monticore.lang.monticar.cnnarch.helper.PredefinedVariables;
import de.monticore.lang.monticar.cnnarch.predefined.AllPredefinedVariables;
public class ASTPredefinedArgument extends ASTPredefinedArgumentTOP {
@@ -41,7 +41,7 @@ public class ASTPredefinedArgument extends ASTPredefinedArgumentTOP {
public void setParallel(String parallel) {
super.setParallel(parallel);
if (parallel != null && !parallel.isEmpty()){
setName(PredefinedVariables.CARDINALITY_NAME);
setName(AllPredefinedVariables.CARDINALITY_NAME);
}
}
@@ -49,7 +49,7 @@ public class ASTPredefinedArgument extends ASTPredefinedArgumentTOP {
public void setSerial(String serial) {
super.setSerial(serial);
if (serial != null && !serial.isEmpty()) {
setName(PredefinedVariables.FOR_NAME);
setName(AllPredefinedVariables.FOR_NAME);
}
}
}
@@ -27,6 +27,7 @@ public class CNNArchPreResolveCocos {
return new CNNArchCoCoChecker()
.addCoCo(new CheckMethodLayer())
.addCoCo(new CheckVariable())
.addCoCo(new CheckIODeclaration())
.addCoCo(new CheckIOLayer())
.addCoCo(new CheckArgument())
.addCoCo(new CheckMethodDeclaration());
/**
*
* ******************************************************************************
* MontiCAR Modeling Family, www.se-rwth.de
* Copyright (c) 2017, Software Engineering Group at RWTH Aachen,
* All rights reserved.
*
* This project is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 3.0 of the License, or (at your option) any later version.
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this project. If not, see <http://www.gnu.org/licenses/>.
* *******************************************************************************
*/
package de.monticore.lang.monticar.cnnarch._cocos;
import de.monticore.lang.monticar.cnnarch._ast.ASTIODeclaration;
public class CheckIODeclaration implements CNNArchASTIODeclarationCoCo {
@Override
public void check(ASTIODeclaration node) {
//todo: check io shape; only 1 and 3 is allowed
}
}
@@ -21,10 +21,12 @@
package de.monticore.lang.monticar.cnnarch._cocos;
import de.monticore.lang.monticar.cnnarch._ast.ASTArchitecture;
import de.monticore.lang.monticar.cnnarch._symboltable.ArchitectureSymbol;
public class CheckLayerInputs implements CNNArchASTArchitectureCoCo {
@Override
public void check(ASTArchitecture node) {
//todo:
ArchitectureSymbol architecture = (ArchitectureSymbol) node.getSymbol().get();
architecture.getBody().checkInputAndOutput();
}
}
@@ -26,7 +26,7 @@ import de.monticore.lang.monticar.cnnarch._symboltable.MethodDeclarationSymbol;
import de.monticore.lang.monticar.cnnarch._symboltable.MethodLayerSymbol;
import de.monticore.lang.monticar.cnnarch._symboltable.VariableSymbol;
import de.monticore.lang.monticar.cnnarch.helper.ErrorCodes;
import de.monticore.lang.monticar.cnnarch.helper.PredefinedMethods;
import de.monticore.lang.monticar.cnnarch.predefined.AllPredefinedMethods;
import de.se_rwth.commons.logging.Log;
import java.util.HashSet;
@@ -64,8 +64,8 @@ public class CheckMethodLayer implements CNNArchASTMethodLayerCoCo{
}
for (ASTArgument argument : node.getArguments()){
requiredArguments.remove(argument.getName());
if (argument.getName().equals(PredefinedMethods.GLOBAL_NAME)){
requiredArguments.remove(PredefinedMethods.KERNEL_NAME);
if (argument.getName().equals(AllPredefinedMethods.GLOBAL_NAME)){
requiredArguments.remove(AllPredefinedMethods.KERNEL_NAME);
}
}
@@ -23,9 +23,9 @@ package de.monticore.lang.monticar.cnnarch._cocos;
import de.monticore.lang.monticar.cnnarch._ast.ASTParameter;
import de.monticore.lang.monticar.cnnarch._ast.ASTVariable;
import de.monticore.lang.monticar.cnnarch._symboltable.VariableSymbol;
import de.monticore.lang.monticar.cnnarch.helper.Constraints;
import de.monticore.lang.monticar.cnnarch._symboltable.Constraints;
import de.monticore.lang.monticar.cnnarch.helper.ErrorCodes;
import de.monticore.lang.monticar.cnnarch.helper.PredefinedVariables;
import de.monticore.lang.monticar.cnnarch.predefined.AllPredefinedVariables;
import de.monticore.symboltable.Symbol;
import de.se_rwth.commons.logging.Log;
@@ -54,11 +54,16 @@ public class CheckVariable implements CNNArchASTVariableCoCo {
". All new variable and method names have to start with a lowercase letter. "
, node.get_SourcePositionStart());
}
if (name.equals(PredefinedVariables.TRUE_NAME) || name.equals(PredefinedVariables.FALSE_NAME)){
else if (name.equals(AllPredefinedVariables.TRUE_NAME) || name.equals(AllPredefinedVariables.FALSE_NAME)){
Log.error("0" + ErrorCodes.ILLEGAL_NAME_CODE + " Illegal name: " + name +
". No variable can be named 'true' or 'false'"
, node.get_SourcePositionStart());
}
else if (name.equals(AllPredefinedVariables.IF_NAME.toLowerCase())){
Log.error("0" + ErrorCodes.ILLEGAL_NAME_CODE + " Illegal name: " + name +
". No variable can be named 'if'"
, node.get_SourcePositionStart());
}
}
private void checkForDuplicates(ASTVariable node){
@@ -75,6 +75,13 @@ public class ArchitectureSymbol extends ArchitectureSymbolTOP {
}
}
public List<LayerSymbol> getFirstLayers(){
if (!getBody().isResolved()){
resolve();
}
return getBody().getFirstAtomicLayers();
}
public boolean isResolved(){
return getBody().isResolved();
}
@@ -20,8 +20,7 @@
*/
package de.monticore.lang.monticar.cnnarch._symboltable;
import de.monticore.lang.monticar.cnnarch.helper.Constraints;
import de.monticore.lang.monticar.cnnarch.helper.PredefinedVariables;
import de.monticore.lang.monticar.cnnarch.predefined.AllPredefinedVariables;
import de.monticore.symboltable.CommonSymbol;
import de.monticore.symboltable.MutableScope;
import de.monticore.symboltable.Symbol;
@@ -67,7 +66,7 @@ public class ArgumentSymbol extends CommonSymbol {
}
protected void setRhs(ArchExpressionSymbol rhs) {
if (getName().equals(PredefinedVariables.FOR_NAME)
if (getName().equals(AllPredefinedVariables.FOR_NAME)
&& rhs instanceof ArchSimpleExpressionSymbol
&& (!rhs.getValue().isPresent() || !rhs.getValue().get().equals(1))){
this.rhs = ArchRangeExpressionSymbol.of(
@@ -75,7 +74,7 @@
(ArchSimpleExpressionSymbol) rhs,
false);
}
else if (getName().equals(PredefinedVariables.CARDINALITY_NAME)
else if (getName().equals(AllPredefinedVariables.CARDINALITY_NAME)
&& rhs instanceof ArchSimpleExpressionSymbol
&& (!rhs.getValue().isPresent() || !rhs.getValue().get().equals(1))) {
this.rhs = ArchRangeExpressionSymbol.of(
@@ -121,7 +120,7 @@
List<ArgumentSymbol> serialArgumentList = new ArrayList<>(serialElementList.size());
for (ArchSimpleExpressionSymbol element : serialElementList){
ArchSimpleExpressionSymbol value = element;
if (getName().equals(PredefinedVariables.FOR_NAME) || getName().equals(PredefinedVariables.CARDINALITY_NAME)){
if (getName().equals(AllPredefinedVariables.FOR_NAME) || getName().equals(AllPredefinedVariables.CARDINALITY_NAME)){
value = ArchSimpleExpressionSymbol.of(1);
}
@@ -22,19 +22,14 @@ package de.monticore.lang.monticar.cnnarch._symboltable;
import de.monticore.lang.math.math._ast.ASTMathExpression;
import de.monticore.lang.math.math._ast.ASTMathFalseExpression;
import de.monticore.lang.math.math._ast.ASTMathTrueExpression;
import de.monticore.lang.math.math._symboltable.MathSymbolTableCreator;
import de.monticore.lang.math.math._symboltable.expression.MathExpressionSymbol;
import de.monticore.lang.math.math._symboltable.expression.MathNameExpressionSymbol;
import de.monticore.lang.math.math._visitor.MathVisitor;
import de.monticore.lang.monticar.cnnarch._ast.*;
import de.monticore.lang.monticar.cnnarch._visitor.CNNArchInheritanceVisitor;
import de.monticore.lang.monticar.cnnarch._visitor.CNNArchVisitor;
import de.monticore.lang.monticar.cnnarch._visitor.CommonCNNArchDelegatorVisitor;
import de.monticore.lang.monticar.cnnarch.helper.Constraints;
import de.monticore.lang.monticar.cnnarch.helper.PredefinedMethods;
import de.monticore.lang.monticar.cnnarch.helper.PredefinedVariables;
import de.monticore.lang.monticar.cnnarch.predefined.AllPredefinedMethods;
import de.monticore.lang.monticar.cnnarch.predefined.AllPredefinedVariables;
import de.monticore.lang.monticar.types2._ast.ASTType;
import de.monticore.symboltable.*;
import de.se_rwth.commons.logging.Log;
@@ -140,12 +135,12 @@ public class CNNArchSymbolTableCreator extends de.monticore.symboltable.CommonSy
}
private void createPredefinedConstants(){
addToScope(PredefinedVariables.createTrueConstant());
addToScope(PredefinedVariables.createFalseConstant());
addToScope(AllPredefinedVariables.createTrueConstant());
addToScope(AllPredefinedVariables.createFalseConstant());
}
private void createPredefinedMethods(){
for (MethodDeclarationSymbol sym : PredefinedMethods.createList()){
for (MethodDeclarationSymbol sym : AllPredefinedMethods.createList()){
addToScope(sym);
}
}
@@ -413,11 +408,13 @@ public class CNNArchSymbolTableCreator extends de.monticore.symboltable.CommonSy
public void visit(ASTIOLayer node) {
Optional<IODeclarationSymbol> optIODef = currentScope().get().resolve(node.getName(), IODeclarationSymbol.KIND);
int arrayLength = 1;
boolean isInput = false;
if (optIODef.isPresent()){
arrayLength = optIODef.get().getArrayLength();
isInput = optIODef.get().isInput();
}
if (!node.getIndex().isPresent() && arrayLength > 1){
if (!node.getIndex().isPresent() && arrayLength > 1 && isInput){
List<LayerSymbol> ioLayers = new ArrayList<>(arrayLength);
IOLayerSymbol ioLayer;
for (int i = 0; i < arrayLength; i++){
@@ -455,7 +452,7 @@ public class CNNArchSymbolTableCreator extends de.monticore.symboltable.CommonSy
@Override
public void visit(ASTArrayAccessLayer node) {
MethodLayerSymbol methodLayer = new MethodLayerSymbol(PredefinedMethods.GET_NAME);
MethodLayerSymbol methodLayer = new MethodLayerSymbol(AllPredefinedMethods.GET_NAME);
addToScopeAndLinkWithNode(methodLayer, node);
}
@@ -20,8 +20,10 @@
*/
package de.monticore.lang.monticar.cnnarch._symboltable;
import de.monticore.lang.monticar.cnnarch.helper.ErrorCodes;
import de.monticore.symboltable.MutableScope;
import de.monticore.symboltable.Symbol;
import de.se_rwth.commons.logging.Log;
import java.util.*;
@@ -51,17 +53,26 @@ public class CompositeLayerSymbol extends LayerSymbol {
for (LayerSymbol current : layers){
if (previous != null && !isParallel()){
current.setInputLayer(previous);
previous.setOutputLayer(current);
}
else {
if (getInputLayer().isPresent()){
current.setInputLayer(getInputLayer().get());
}
if (getOutputLayer().isPresent()){
current.setOutputLayer(getOutputLayer().get());
}
}
previous = current;
}
this.layers = layers;
}