LBN issues — https://git.rwth-aachen.de/3pia/lbn/-/issues

**Issue #2: Using LBN with TensorFlow 2** (https://git.rwth-aachen.de/3pia/lbn/-/issues/2)
Opened by Ben Tannenwald, last updated 2020-03-24

Attachment: [minimalExample.py](/uploads/c4bd8602048b0fdce75d602b541068c4/minimalExample.py)
I'm having an issue using the LBN package (`1.0.3`) with TensorFlow (`2.0.0`) and Python 3.7.4. In the attached minimal example, I create some four-vectors using the `create_four_vectors` function from the repository's `test.py`, generate some random labels, build a simple model, and then attempt to train. The model compiles but fails on the second training iteration with a message like:
```
TypeError: An op outside of the function building code is being passed
a "Graph" tensor. It is possible to have Graph tensors
leak out of the function building context by including a
tf.init_scope in your function building code.
For example, the following function will fail:
  @tf.function
  def has_init_scope():
    my_constant = tf.constant(1.)
    with tf.init_scope():
      added = my_constant * 2
The graph tensor has name: sequential/lbn_layer/LBN/particles/particle_weights:0
```
It seems as if something isn't properly connected, but I've been unable to figure out the issue, so any help/insight would be much appreciated. Please let me know if there's any additional information I can provide.
**Issue #1: LBNLayer not working when called on multiple tensors** (https://git.rwth-aachen.de/3pia/lbn/-/issues/1)
Opened by Niclas Steve Eich, last updated 2019-08-09

The `LBNLayer` class does not work properly when called on multiple different tensors, as this minimal example shows:
(using tensorflow-gpu==1.13.1)
```python
import tensorflow as tf
from lbn import LBNLayer
sseq = tf.keras.Sequential()
sseq.add(LBNLayer(n_particles=8))
x_0 = tf.placeholder(tf.float32, (100, 8, 4))
x_1 = tf.placeholder(tf.float32, (100, 8, 4))
y_0 = sseq(x_0)
y_1 = sseq(x_1)
tf.gradients(y_0, x_0)
# [<tf.Tensor 'gradients/sequential/LBN/inputs/split_grad/concat:0' shape=(100, 8, 4) dtype=float32>]
#
tf.gradients(y_1, x_1)
# [None]
# tf.gradients returns None, because there is no connection in the graph from y_1 to x_1
tf.gradients(y_1, x_0)
# [<tf.Tensor 'gradients_2/sequential/LBN/inputs/split_grad/concat:0' shape=(100, 8, 4) dtype=float32>]
# This connection exists
```
Calling the sequential model a second time on the tensor `x_1` does not connect `x_1` to the graph. This is not the desired behaviour of a Keras layer.
I suspect the reason is that the `call` function of `LBNLayer` (which is the call function of the LBN) registers the given input tensors as attributes of the LBN object. As a result, it is stuck with the initially registered tensors and is unable to feed other tensors through.
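The suspected failure mode can be sketched in plain Python, independent of TensorFlow (the class and attribute names below are hypothetical illustrations, not the actual LBN internals): a layer that caches its first input as an attribute silently ignores every later input, which would explain why `y_1` has no graph connection to `x_1`.

```python
class CachingLayer:
    """Hypothetical layer that stores its first input as an attribute,
    mimicking the suspected behaviour of LBN's call function."""

    def __init__(self):
        self.inputs = None  # set on the first call, then never replaced

    def __call__(self, x):
        if self.inputs is None:
            self.inputs = x  # first call: register the input
        # all subsequent computation uses self.inputs, not x
        return [v * 2 for v in self.inputs]


layer = CachingLayer()
y_0 = layer([1, 2, 3])  # computed from the first input, as expected
y_1 = layer([4, 5, 6])  # still computed from the *first* input
print(y_0)  # [2, 4, 6]
print(y_1)  # [2, 4, 6] -- the second input was never used
```

In this sketch, the fix would be to consume the argument `x` on every call instead of caching it, which is what a standard Keras layer's `call` does.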
Edit:
To make the code work with TF 1.13.1, one has to change
```python
weight_shape = (n_in, m)
```
to
```python
weight_shape = (n_in.value, m)
```
in `lbn.py`, line `512`.
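For context on why the `.value` access is needed: in TF 1.x, inspecting a tensor's static shape yields `Dimension` objects rather than plain ints, so a weight shape built from them can fail where an int tuple is expected. A minimal stand-in (the `Dimension` class below is a simplified mock, not the TensorFlow implementation):

```python
class Dimension:
    """Simplified stand-in for TF 1.x's tf.Dimension."""

    def __init__(self, value):
        self.value = value


# n_in as it might arrive from a TF 1.x static shape: a Dimension, not an int
n_in, m = Dimension(4), 8

weight_shape = (n_in, m)       # original code: tuple contains a Dimension object
fixed_shape = (n_in.value, m)  # suggested fix: plain ints only

assert fixed_shape == (4, 8)
assert not isinstance(weight_shape[0], int)
```

TF 2.x removed this wrinkle by making shape entries plain ints (or `None`), which is why the unpatched line works there.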