Commit 93ac6803 authored by Simon Klüttermann's avatar Simon Klüttermann

q

parent ebb9ff1a
Showing with 38 additions and 27 deletions
......@@ -62,13 +62,13 @@ training for 1000 epochs, and then until the loss increases for 250 epochs, resu
After seeing what an effect some kind of normalization can have, we are not completely satisfied with the normalized Feature Maps anymore:
<i f="aucmap928" wmode="True">ABE for better norm (just a copy)</i>
consider the highest #p_T# Value (the lower right corner). While being the generally most interesting Particle, there is no Classification Power in it, and by looking at its distribution it immediately becomes clear why
<i f="none" wmode="True">better norm pt0 dist</i>
<i f="pt0draw928" wmode="True">(928/drawp0.py)better norm pt0 dist</i>
The Values are basically constant, so their Output has the same reconstruction as the flag Values (first column), from which I don't expect any physically useful Information.
So let's solve this: since #p_T# always has the same structure<note To be more precise, the difference between the first and the second Particle is higher than the difference between the last two ones>, and each Value effectively gets divided by the first one, the first Value always ends up the same. I solve this by replacing the definition of #n# with:
##Eq(n,2*z/(max(abs(z))+mean(abs(z))))##
removing the need to set one Value to either positive or negative one, and thus making the highest Value in #p_T# actually useful. And as you see, this works
<i f="none" wmode="True">ABE good norm</i>
<i f="aucmap534" wmode="True">ABE good norm</i>
But, as you see, now the whole Classification Power lies in flag, and this should be quite confusing to you: Something with no physical Meaning being more useful than everything else.
This I will explain in Chapter <ref oneoff>.
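To make the new definition of #n# concrete, here is a minimal numpy sketch (the function name and implementation are mine, not taken from the thesis code):

```python
import numpy as np

def normalize(z):
    """Sketch of ##Eq(n,2*z/(max(abs(z))+mean(abs(z))))##.

    Unlike dividing by max(abs(z)) alone, no entry is forced to exactly
    +-1, so the leading p_T value keeps its classification power.
    """
    z = np.asarray(z, dtype=float)
    return 2.0 * z / (np.abs(z).max() + np.abs(z).mean())
```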
......
......@@ -4,7 +4,7 @@ HAVE I ALREADE WRITTEN THIS?
These normalized Networks do result in much worse AUC scores, as expected after removing what is probably the most interesting Feature. Initial tries result in AUC scores around #0.55#, but interestingly they can be invertible. This is not as easy as just switching Background for Signal Data: the old low compression size of #5# of #12+flag# does not allow for any useful reconstruction (as would have been expected, since Networks that don't reconstruct angles obviously require less compression size), but a compression size of #9# of #12+flag# Features seems to be the first that allows for invertible networks
<i f="none" wmode="True">Invertibility as function of compression, showing also in accuracy</i>
<i f="totalcomp0" wmode="True">(c3p/totalroccomp.py)Invertibility as function of compression, showing also in accuracy</i>
You also see that the results are much less stable; this can be solved by a more extensive training procedure (see above).
......
<subsection table="Different Algorithms" label="mixedalt">
Here I test different One Class Algorithms on one Autoencoder that trains on the first 4 Particles of top jets and tries to find qcd jets as a signal. For comparison: simply looking at the loss of this Network, you get an AUC score of about #0.377#, and the best AUC score I have seen for an Autoencoder is about #0.25#. I chose this one since its reconstruction seems to be quite accurate, meaning that most information about the Jet is still contained in the Feature Space
<i f="none" wmode="True">677 simpledraw</i>
<i f="simpledraw677" wmode="True">677 simpledraw</i>
<subsubsection title="SVM" label="mixedSVM">
An SVM works better on the compressed Space, now reaching an AUC of #0.434#; but even though this is definitely an improvement, it is still worse than just using the Autoencoder loss
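A hedged sketch of this test, using sklearn's OneClassSVM on the compressed features (the data here is random stand-in data, and the hyperparameters are placeholders, not the values used in the thesis):

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
z_train = rng.normal(size=(1000, 9))      # latent vectors of the training jets (stand-in)
z_test = rng.normal(size=(500, 9))        # latent vectors of the evaluation jets (stand-in)
labels = rng.integers(0, 2, size=500)     # 1 marks the signal class (stand-in)

clf = OneClassSVM(kernel="rbf", nu=0.1)   # placeholder hyperparameters
clf.fit(z_train)
scores = -clf.decision_function(z_test)   # larger = further from the training class
print(roc_auc_score(labels, scores))
```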
......@@ -18,6 +18,6 @@ K neirest neighbour is the first algorithm that improves over simply using the A
OneOffs seem to be the way to go here, and will be used exclusively for the rest of this Chapter: one Network reached about #0.247# with an error of #0.005#, already beating all my Autoencoders, and by combining multiple ones (one simple combination is sketched below), you can reach an AUC of about #0.2#, which is quite good.
<i f="none" wmode="True">comparison of all one class algos</i>
<i f="none" wmode="True">(will be written later)comparison of all one class algos</i>
......@@ -3,9 +3,9 @@
The easiest model for understanding the OneOff width is something like #sqrt(abs(x-mean(x))**2+std(x)**2)# (a small sketch of this model follows at the end of this section). And while the means usually match the training data, the standard deviation can be of any size. So training on one dataset and comparing it to another dataset with the same mean but less width results in a noninvertible network. This is nothing we can do anything about; the same effect exists for Autoencoder classifiers, and it is even less probable here, as we try to minimize the width, making it less likely that there is a distribution with lower width. Still, this can happen, and you see the frequency in Chapter <ref cross>, but here we want to mention one effect that is even worse: anti-invertibility: if trained on a, b has lower loss, and if trained on b, a has lower loss. This is an ultra rare effect, as we have only observed it once (or multiple times, if you count the statistical invertibility of Chapter <ref ldm>), and an effect that cannot happen with just an autoencoder, so how does this happen? In general, a OneOff network should not be able to do this: if one feature has a certain width in a and a lower width in b, you should be able to pick the same feature in b, resulting in b finding a more complicated; the only exception is the case in which there is another feature in b with even lower width. But then the width of that second feature in a would have to be bigger than that of the first feature, since otherwise it would have been chosen, again resulting in b finding a more complicated. In math: given #LessThan(f_1**b,f_1**a)#, the network is not anti-invertible unless #LessThan(f_2**b,f_1**b)#; but since #LessThan(f_1**a,f_2**a)#, it then also has to be true that #LessThan(f_2**b,f_2**a)#, so no Network can be anti-invertible<note we simplify here a tiny bit, since you could mix two features, but this does not change the math>. That being said, since we use Autoencoders in front, it can happen that a Feature of the first Autoencoder just does not exist in the second one, breaking this logical chain and making anti-invertible Networks possible. We only ever saw a single Event doing this, and it was a Network working on ldm data (see Chapter <ref ldm>). Ldm data is hard to differentiate at best, making noninvertible Networks much more likely (in fact, as seen in Chapter <ref cross>, all noninvertible OneOffs in this Thesis are trained on ldm data), and while trying to scale using dense networks, at a node number of 25 we got an anti-invertible network
<i f="none" wmode="True">antiinv double roc for ldm</i>
<i f="drantiinv25" f2="dsantiinv25" wmode="True">(doubleroc/sep)antiinv double roc for ldm</i>
Interestingly, the separation quality here is much better (even if reversed) than what we will achieve in Chapter <ref ldm>. This implies that this difference is basically just the Jet Size, as the Number of nodes shows a difference between both ldm datasets
<i f="none" wmode="True">ldm size hist</i>
<i f="nhistldm" wmode="True">ldm size hist</i>
and a Network with fewer nodes (we tried 9 and 16, sizes that don't show many zero particles<note zero padded particles to be precise, for more information see Chapter <ref data>>) does not show any real difference between both datasets. (ENTER 36 PARTICLES)
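The width model from the beginning of this section can be made concrete in a few lines; a toy sketch (mine, under the assumption that mean and std are taken from the training data) that also reproduces the noninvertible case of a narrower dataset with the same mean:

```python
import numpy as np

def oneoff_width_score(x, train_mean, train_std):
    # toy model of the OneOff loss: sqrt(|x - mean|^2 + std^2)
    return np.sqrt(np.abs(x - train_mean) ** 2 + train_std ** 2)

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 10000)   # dataset trained on
b = rng.normal(0.0, 0.5, 10000)   # same mean, smaller width
print(oneoff_width_score(a, a.mean(), a.std()).mean())  # higher mean score
print(oneoff_width_score(b, a.mean(), a.std()).mean())  # lower: b looks "simpler"
```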
......@@ -3,12 +3,12 @@
This set of datapoints was generated by Thorben Finke (ENTER LINK?) and consists of Jets with transverse momentum between #150*GeV# and #270*GeV#, either qcd jets or jets initiated by the dark matter candidate suggested in (ENTER REFERENCE).
This dataset implies an unsupervised classification task that is way more difficult than usual top tagging, and as we will see, even more complicated than the other datasets we test our algorithm on here. One reason for this might be that the usual top tagging dataset is very clean, and thus easy to train on, but this does not explain why other datasets that are not cleaned up at all are also easier to differentiate. In fact, we thought multiple times that there was just a mislabeling in the events, or some other error in the data generation. That being said, even though we can never exclude this possibility, we don't think it is the case; and even if it were, this would still be the same complex task of differentiating nearly identical datapoints, and a chance to show off the benefits of our algorithm.
So what makes this dark matter candidate so hard to find? The first difference lies in the angular distribution: while you can differentiate top jets from their qcd counterparts by this distribution alone, and quite well (see Chapters <ref scale> and <ref secinv>), here both angular distributions are basically the same
<i f="none" wmode="True">angular distribution of ldm jets(i guess newdata/imgs und both (alt)angular)</i>
<i f="angulardistLDM" f2="altangulardistLDM" wmode="True">angular distribution of ldm jets(i guess newdata/imgs und both (alt)angular)</i>
and the momentum distribution is not much better
<i f="none" wmode="True"> momentum dist ldm</i>
<i f="pthistLDM" wmode="True">(newdata3/histpt) momentum dist ldm</i>
That being said, there is one easily understandable parameter that can be used to differentiate both datasets: the Number of Particles in the Jet
<i f="none" wmode="True"> number size dist ldm (i guess nedata3/histn.py)</i>
<i f="nhistldm" wmode="True"> number size dist ldm (i guess nedata3/histn.py)</i>
Sadly, this Parameter is not very useful, for three reasons
<list>
......@@ -29,13 +29,14 @@ This means, that we train 4 node networks, that hopefully find some sense of sub
</list>
This means that there is a slight preference of OneOff Networks to choose easier Features<note In general, this is actually a really good thing: not only is this statistically useful, but it also means that OneOffs automatically prevent themselves from overfitting (at least to a degree), and thus generalize better>, which sadly means that this is a really hard test for them, and the only thing that we want here is invertibility: a Network, trained on light qcd jets, that thinks ldm jets are more complicated, and the inverse.
<i f="none" wmode="True">ldm invertibility (multisep.py)</i>
<i f="multisep10" wmode="True">ldm invertibility (multisep.py)</i>
As you see, this is not at all trivial, but when we consider the loss of the OneOff network, drawn on the X axis as quality, the best Networks are actually invertible. And since this is still completely unsupervised, using just the feature quality of a network, we can say that we can safely generate invertible Anomaly detection algorithms on this dataset.
That being said, this is obviously not useful by itself: half a percentage point in AUC does not help you differentiate new physics at all. But it is worth noting that top tagging after normalization once looked very similar to this (see Chapter (ENTER CHAPTER)), and even though even there the Classification quality was a lot better, we see no reason why you could not improve and optimize these Networks to be way better, especially since we did not run any hyperparameter optimization (except for the compression size, which is 1 bigger (at 10) than was used for top tagging, to compensate for the more random structure), and still only use 4 Particles, which even for a supervised Algorithm are not easy to differentiate
<i f="none" wmode="True">supervised 4 particle ldm</i>
<i f="none" wmode="True">(not yet having a final version)supervised 4 particle ldm</i>
So for us, even this fairly weak separation is a huge success, especially since it shows that even when the data is basically inseparable, our Algorithm won't be confused by some useless variable, which could not be said about the classical Autoencoder loss
<i f="none" wmode="True">ldm invertibility of recqual.py (wie multisep, aber ohne oo)</i>
ACTUALLY NOT TRUE!
<i f="multiroc10" wmode="True">ldm invertibility of recqual.py (wie multisep, aber ohne oo)</i>
<subsection title="Quark v Gluon" label="qg">
Quark and Gluon data is generated by Madgraph(ENTER REFERENCE), Pythia(ENTER REFERENCE) and Delphes(ENTER REFERENCE). One set is generated as parton parton to gluon gluon collisions, and another as parton parton to two partons without gluons; their jets are used if the transverse jet momentum is between 550 and 650 GeV. This data was originally used to see if a qcd trained Classifier makes an easily accessible difference between Quarks and Gluons<note you could interpret this as another form of complexity: while top jets are all the result of top quarks, with qcd jets there are multiple options; we thought this could explain why qcd trained encoders are generally worse, but this is not the case, as we will see later in this Chapter>, but even though this seems not to be the case, we can still use this dataset to test our algorithm a bit further. Again we use 4 Particle Networks, with a compression size of #9# and only negligible hyperparameter optimization, to reach a quality of
<i f="none" wmode="True">double roc curve for quark gluon</i>
<i f="drquarkgluon" f2="dsquarkgluon" wmode="True">double roc curve for quark gluon</i>
As you see, these are invertible Networks, and even though they are not very good ones, as described in the previous Chapter <ref ldm> this does not really matter, since optimization has the potential to improve them quite a lot. (ENTER REFERENCE https://arxiv.org/abs/1712.03634) could be seen as a Reference Paper for this Process: even though they use a supervised approach and high level input data on different transverse momentum ranges, their achieved AUC values below #0.9# suggest that this tagging job is more complicated than usual top tagging. Chapter <ref crossdata> also supports this Hypothesis
......
<subsection title="leptons" label="leptons">
This final dataset is not very physically useful, and more interesting from an anomaly detection standpoint: we again generate Particle collisions using Madgraph, Pythia and Delphes, but instead of partons colliding into partons, we use leptons colliding into partons. For the first set, we use any combination of electrons and muons with arbitrary charge, and for the second one we only use tau leptons. We also use a fairly big transverse momentum range of #20*GeV# to #5000*GeV# to vary another Parameter.
<i f="none" wmode="True">lepton double roc</i>
<i f="drleptons" f2="dsleptons" wmode="True">lepton double roc</i>
Again you see a clear invertibility, suggesting the generality we set out to study.
......
<subsection title="Even more data" label="crossdata">
At the beginning of this Chapter, we called Anomaly detection the task of finding everything that is not similar to the trained-on class. And even though we tried to evaluate this Task by showing invertibility on a multitude of datasets, we are slowly running out of particles to test it on<note especially since the initial toptagging dataset already contained the whole of qcd>. That being said, one thing we did not do yet is to mix the datasets. You might question how useful this is from a physical standpoint, as there will probably never be a situation in which you want to find leptons knowing only gluons, but you could say the same about the task of finding qcd jets knowing only dark matter. The point is that these basically thought experiments build trust in the algorithm used, as Chapter <ref secinv> clearly shows that invertibility and feature triviality can be linked. Also, from a computational standpoint, tau generated jets are just as different for a qcd trained Network, and since training unsupervised even means that we don't have to train new anomaly detection models, there is no reason not to compare those Jets
<i f="none" wmode="True"> cross invertibility (crossauc)</i>
<i f="crosssep" wmode="True"> cross invertibility (crosssep)</i>
As you see, there are only a few spots that are not invertible (we changed the meaning of each AUC value in such a way that each slot should be deep blue in the best case); for simplicity we mark them with black spots. But there are also a lot of values that are simply not visible, so here they are again as a table
......
......@@ -9,13 +9,13 @@ This generated consists of 5000 randomly generated users with 4 attributes (a co
Now, for each person in this Network, we only look at the local surroundings of this person. This is done by taking only the connections between the friends, or friends of friends, of this person into account. This generates much smaller graphs that we could feed into the Autoencoder<note you could ask yourself if this reuse of nodes does not result in a lot of overfitting, but as you see below, that is not the case, possibly because of the low number of parameters in the graph autoencoder>, but for simplicity we cut a bit on the size of those new graphs, as we allow for at most 70 nodes<note this keeps #0.9932# of all data points>.
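A minimal sketch of this neighbourhood extraction, using networkx (an assumption; the thesis code may be structured differently):

```python
import networkx as nx

def local_graph(G, person, max_nodes=70):
    """Friends-and-friends-of-friends subgraph around `person`.

    nx.ego_graph with radius 2 returns the induced subgraph of all nodes
    within distance 2; neighbourhoods with more than `max_nodes` nodes are
    dropped (the cut that keeps 99.32% of all data points).
    """
    sub = nx.ego_graph(G, person, radius=2)
    return sub if sub.number_of_nodes() <= max_nodes else None
```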
<subsubsection title="Training" label="netstrain">
<i f="none" wmode="True">network plot for nets</i>
<i f="visnets" wmode="True">network plot for nets</i>
To train this Network, we use a fairly simple setup, compressing the 70 nodes once by a factor of 5, resulting in 14 nodes with 12 features each.
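Purely to illustrate the shapes involved (the actual layer is a learned graph compression, not this fixed pooling):

```python
import numpy as np

x = np.random.rand(70, 12)                       # 70 nodes, 12 features each
compressed = x.reshape(14, 5, 12).mean(axis=1)   # factor-5 pooling: 5 nodes -> 1
print(compressed.shape)                          # (14, 12)
```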
As you see in the training curve, the loss is basically the same for training and validation, and contains steps, as also seen before<note for example in Chapter <ref secondworking>>.
Much more interesting is the loss distribution
<i f="none" wmode="True">loss distribution for nets</i>
<i f="histlossNETS" wmode="True">loss distribution for nets</i>
As you see, the reconstruction is not very good, as basically all events have a nonzero loss. But maybe even more interesting: there is some difference in the reconstruction of our anomalous datapoints. It might seem like you could use this to separate datapoints, but there is a difficulty: if you use an autoencoder to separate datasets, you assume that a dataset the Network never saw will be reconstructed worse than the dataset it trained on, but here the opposite is the case: the abnormal data is reconstructed more easily, so any separation is a bit weird. Sure, you could just look at something like #1-loss#, but you do not have any reasoning for this: maybe, and even probably, only this kind of abnormal data will be reconstructed more easily<note this is probably the case here, since the abnormal data is less complicated, as it contains fewer nodes>, and by negating the loss, you would not get any useful separation on other datapoints. So what can we do? Your first instinct might be to just look at this distribution and separate accordingly: define everything as weird on the side that you need. But this would no longer be unsupervised, as it requires information about the alternative datapoints. So as an alternative: use OneOff networks. In their easiest version, they take the mean of the training peak and define distance as the difference to this peak, which would already solve this problem, and in their deep implementation they might even improve this further. Anyhow, this works quite well
<i f="none" wmode="True">oo dist for nets</i>
<i f="oohistNETS" wmode="True">oo dist for nets</i>
Please note that we dispense with giving any number here that measures how good the reconstruction is, as we could improve it arbitrarily by changing the data generation
<subsubsection title="Whats Next" label="netsnext">
......
......@@ -10,9 +10,9 @@ Every Atom is given by 3 spacial coordinates, that give the position relative to
Those datapoints get filtered a lot, since fewer events do not matter as much as faulty ones, as long as overfitting is not a problem. First, we only allow molecules constructed entirely from the Atoms H, C, O and N. Then we allow at most 50 and at least 25 atoms, of which between 10 and 25 are to be Hydrogen, and check if the downloaded file is consistent<note The molecular formula matches the distribution of atoms>.
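A hedged sketch of this filter (assuming each molecule is given as a list of element symbols; the consistency check against the molecular formula is omitted):

```python
def keep_molecule(atoms):
    """atoms: list of element symbols, e.g. ["C", "H", "H", "O", ...]"""
    n_hydrogen = sum(a == "H" for a in atoms)
    return (
        set(atoms) <= {"H", "C", "O", "N"}   # only these elements allowed
        and 25 <= len(atoms) <= 50           # total number of atoms
        and 10 <= n_hydrogen <= 25           # hydrogen count
    )
```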
<subsubsection title="Training" label="moltrain">
<i f="none" wmode="True">modelsetup(s) mol</i>
<i f="visMOL" wmode="True">modelsetup(s) mol</i>
We again use a fairly simple setup, consisting only of a handful of graph update layers and possibly a graph compression layer, whose effect we compare
<i f="none" wmode="True">training results mol 5/6</i>
<i f="historyMOL" f2="historyMOL2" wmode="True">training results mol 5/6</i>
There are two things to note here: first, since the mass can reach an order of magnitude of #1000*g/mol#, and since the difference is squared, the loss can reach very high values at the beginning of the training. This is also why you see orders of magnitude of change in the loss function. As you also see, the change in loss differs between the compressing Network and the noncompressed version: while both Networks require similar times to calculate a Training Epoch, the compressed Version requires more than 100 Epochs less to reach a similar result. On the other hand, the noncompressed version reaches a slightly lower minimal loss (#73# in comparison to #78#), even though it should be noted that this difference is tiny compared to the initial losses, and the compressing version has a bunch more parameters that could be tweaked to change this.
<subsubsection title="Whats next?" label="molnext">
......
......@@ -8,26 +8,27 @@ The corresponding Tutorial can be found here (ENTER LINK) and the full code is f
<subsubsection title="Data generation" label="feyndata">
Data generation for Feynman Diagrams means converting Data rather than outright generating it. The problem is that all Diagrams you find are usually given as Images, and writing a Program to read every Image into a Diagram is absolutely nontrivial, which is why we just converted those diagrams by hand<note you could actually use a graph neural network for this, built similarly to the one from the next chapter <ref build>>. You could ask yourself if writing an image-based autoencoder to work on those images would not be much less work. And even though we would agree, we think it would also work much worse, as you could not differentiate between an Image that just looks like a feynman diagram and an Image that actually represents some physical insight. If everything looks like a feynman diagram, you can easily use the loss to differentiate those two cases, since a change in loss now definitely represents a better reconstruction in the autoencoder we will train. Also, by training on Images you would more probably see overfitting, resulting in a higher number of required training samples, which we don't have.
We use all diagrams from (ENTER REFERENCE)<note These diagrams are of relatively low order> that match our filter of only SM diagrams and at most 9 lines, and represent each diagram in the following way: each line becomes a node, and any two lines that meet in a vertex are connected. This might seem counterintuitive at first, as we basically switch nodes and edges, but it is actually necessary, since each edge requires two nodes, and in a usual diagram input as well as output lines only touch one vertex. Then each line (node) is represented by a 14-dimensional vector: a one-hot encoding of the particle type (gluon, quark, lepton, muon, Higgs, W Boson, Z Boson, photon, proton, jet), 3 special boolean values encoding anti particles<note for simplicity this variable is always zero for lines that are neither input nor output>, input lines and output lines, and a fourteenth value that is always 1 (flag).
<i f="none" wmode="True"> Example Image of the conversion </i>
<i f="conv01" f2="conv02" wmode="True"> Example Image of the conversion </i>
<subsubsection title="Training" label="feyntrain">
<i f="visfeyn" wmode="True">model plot feyn</i>
Here too, a fairly easy setup is used, but instead of the compression algorithm we use the abstraction one, and the paramlike deconstruction algorithm replaces the classical one, to encode an abstraction by a factor of 3 (reducing 9 nodes into 3). Therefore we add 3 Parameters, as well as a couple more graph update steps. One thing that might be important later is that we don't punish the resulting graph structure directly, even though the paramlike decompression algorithm should make this possible, but only indirectly, since a nonsensical graph structure will worsen the quality of the update step.
<i f="historyfeyn" wmode="True">training plot for feynnp</i>
As you see, the training curve improves quite drastically after the initial plateau, only to slow down later and reach a validation loss below #0.05# at the end, which we are fairly happy with, since it means that, converted to booleans, only about 1 in 20 Values is wrong<note since the results are not booleans, this is only true on average>. More interestingly, you also see that the validation loss is consistently lower than the training loss, which means that this Network, too, does not overfit, and we thus might be able to use it.
<i f="none" wmode="True">lossbyn</i>
<i f="lossbynFEYN" wmode="True">lossbyn</i>
One Problem that was prevalent, for example, in Chapters <ref scale> and <ref nets> is that complexity defines the loss at least as much as accuracy does. This means that when you have multiple levels of complexity in your training data, you might not be able to differentiate more complex from more abnormal datapoints. So, since complexity for feynman graphs intuitively correlates with the number of lines, looking at the loss as a function of the number of lines is interesting: as you see, a bigger loss is generally more probable for higher line counts, but the relation is not that strong, as the difference is marginal. This suggests that the complexity the network sees is not the same as the complexity we see, and thus the network might return more valuable information.
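The check itself is a small groupby; a sketch with hypothetical arrays `losses` and `n_lines` of equal length:

```python
import numpy as np

def mean_loss_by_lines(losses, n_lines):
    # average reconstruction loss for each line count that occurs
    losses, n_lines = np.asarray(losses), np.asarray(n_lines)
    return {int(n): float(losses[n_lines == n].mean()) for n in np.unique(n_lines)}
```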
Finally, to see this valuable information, we have to compare the loss to the loss of other diagrams. We first thought of looking at non-SM diagrams, but different particles would require a different encoding, so we instead try diagrams whose contribution vanishes:
<i f="van01" f2="van02" f3="van03" f4="van04" wmode="True">vanishing diagramms (using (ENTER REFERENCE https://feynman.aivazis.com/)</i>
<i f="van01" f2="van02" f3="van03" wmode="True">vanishing diagramms (using (ENTER REFERENCE https://feynman.aivazis.com/)</i>
<i f="van04" f2="van05" f3="van06" wmode="True">more vanishing diagramms</i>
As you see, the first diagram is zero because of C parity<note see (ENTER REFERENCE)>, the second and the third because 5-line vertices are not possible, and the fourth violates all kinds of conservation laws.
<i f="none" wmode="True">loss distribution for feynnp</i>
<i f="histlossFEYN" wmode="True">loss distribution for feynnp</i>
If we plot the loss for those diagrams into the loss distribution, you see some difference: the loss of those diagrams is generally bigger, and the lowest loss, the only one that clearly overlaps the training loss, is achieved by the least complex diagram.
<subsubsection title="Whats Next?" label="feynnext">
These diagrams might make it seem like this method works, but in the end, these are only a handful of diagrams. So looking at more alternative diagrams is definitely a good idea. This might result in some vanishing diagrams reaching fairly low losses, but it might allow you to understand what the network considers complexity. In this note<note from 42 diagrams with 5 lines, LL601000002 has the lowest loss, while LL90000001 has the highest; for the 12 diagrams with 6 lines these are LL90000009 and LL90000005; for 44 of size 7 you find LL10200011 and LL21000001; and for the 21 graphs of size 8 the extremal cases are LL51100101 and LL52200005> you find the diagrams that have extremal losses. And here are two other things we noticed: not every graph that is reconstructed actually exists. Through the original conversion, there are diagrams that could not be translated back into feynman diagrams
<i f="none" wmode="True">a failing diagramm from feynnp</i>
<i f="none" wmode="True">(NOT YET DONE)a failing diagramm from feynnp</i>
This might suggest that weighting the adjacency matrix directly would be a good idea. You might also want to take a look at permutation invariant losses (see Chapter <ref losses>). Secondly, most diagrams have two Inputs, and the network is fairly good at reconstructing this
<i f="none" wmode="True">input number hist</i>
<i f="histicFEYN" wmode="True">input number hist</i>
As you see, it might even be a bit too good, as it reconstructs even more 2-input diagrams than there are. (SEARCH FOR ERRORS IN THOSE RECONSTRUCTED DIAGRAMS)
Finally, reproducibility and the applicability of OneOff Networks might also be interesting here.
......
<subsection title="Graph like generators" label="build">
THIS CHAPTER IS NOT VERY GOOD, I AM WORKING ON REPLACING IT ENTIRELY (SO THIS CHAPTER IS NOT PROOFREAD/HAS NO IMAGES)
As we had an example for an autoencoder and an example for a supervised graph network, we also want to highlight the possibility of having a graph as output. The example here tries to do architecture by interpreting connections as walls.
The corresponding Tutorial can be found here (ENTER LINK) and the full code is found here (ENTER LINK)
......
File added
File added
File added
{"title":"An Example Diagram","elements":{"anchors":{"848":{"id":848,"x":400,"y":300},"6554":{"id":6554,"x":250,"y":300},"7226":{"id":7226,"x":500,"y":200},"7278":{"id":7278,"x":500,"y":400},"7488":{"id":7488,"x":150,"y":200},"7903":{"id":7903,"x":150,"y":400}},"propagators":{"375":{"id":375,"kind":"fermion","anchor1":848,"anchor2":7226,"arrow":0},"2046":{"id":2046,"kind":"fermion","anchor1":6554,"anchor2":848,"arrow":0},"3480":{"id":3480,"kind":"fermion","anchor1":7488,"anchor2":6554,"arrow":0},"4686":{"id":4686,"kind":"fermion","anchor1":848,"anchor2":7278,"arrow":0},"6252":{"id":6252,"kind":"fermion","anchor1":7903,"anchor2":6554,"arrow":0}},"text":{},"shapes":{}}}
\ No newline at end of file
imgs/conv01.png

14.9 KiB

{"title":"An Example Diagram","elements":{"anchors":{"565":{"id":565,"x":550,"y":400},"2415":{"id":2415,"x":400,"y":300},"2497":{"id":2497,"x":250,"y":400},"3002":{"id":3002,"x":550,"y":200},"4831":{"id":4831,"x":250,"y":200}},"propagators":{"1516":{"id":1516,"kind":"fermion","anchor1":2497,"anchor2":4831,"arrow":0},"1640":{"id":1640,"kind":"fermion","anchor1":4831,"anchor2":2415,"arrow":0},"2369":{"id":2369,"kind":"fermion","anchor1":2415,"anchor2":3002,"arrow":0},"5192":{"id":5192,"kind":"fermion","anchor1":2415,"anchor2":565,"arrow":0},"5787":{"id":5787,"kind":"fermion","anchor1":2497,"anchor2":2415,"arrow":0},"6962":{"id":6962,"kind":"fermion","anchor1":3002,"anchor2":565,"arrow":0}},"text":{},"shapes":{}}}
\ No newline at end of file
imgs/conv02.png

16.3 KiB

File added