Commit 6ead032b authored by Simon Klüttermann
<section title="Other Usecases" label="secuse">
This chapter is based on 4 graph autoencoder applications that were originally written as hello-world style programs for its documentation<note ENTER LINK>. Even though there are some quite interesting insights about graph autoencoders to be found here, this also means that this chapter can be skipped without losing too much information. Finally, it also means that none of these applications is optimized in any way, and maybe not even completely thought through, since their main goal is just to give a quick explanation of the code structure and maybe some inspiration, not to work through any idea completely.
<subsection title="Fraud Detection for Social Networks" label="nets">
Social networks provide data that is naturally described as graphs<note let users be nodes, while friendships provide edges>, so by training a network on them, in the hope of finding anomalous events, we not only get a new possible use case for graph autoencoders, but also example code for a network that does not generate its own graph.
The corresponding tutorial can be found here (ENTER LINK) and the full code is found here (ENTER LINK).
<subsubsection title="Data generation" label="netsdata">
Data generation is often the most time-consuming part of a new neural network, and it is no different here. So to save some time, we just generate a sample network. This allows us to ignore privacy settings<note you might not know everything about every user: you would need to decide how to handle a friend about whom you do not know anything>, simplifies the problem a bit<note since a usual facebook user has a lot of information, and often enough hundreds of friends>, and allows us to clearly define the anomalous data points. That being said, this also means that we could tweak the data in every possible way to make the results arbitrarily good, which is why this is the only subchapter that works with self-generated data.
The generated data consists of 5000 randomly generated users with 4 attributes: a constant 1 (flag), #a#: an integer between 1 and 3, #b#: an integer that is either 0 or 1, as well as a normally distributed value that depends on #a#<note a normally distributed value with mean #0# and standard deviation #1#, added to #(2**a)/16# times another normal distribution with mean #1# and standard deviation #0.1#. This is done just to have some relation between the attributes>. The corresponding connections are generated in the following way: each connection has a probability that depends on the difference of the person vectors<note a factor #exp(-abs(x_i-x_j)**2)#> and on the difference of the node indices<note another factor #exp(-0.1*abs(i-j))#>. This means that more similar persons are connected more closely, and that friends of friends are more probably friends. Now we draw on average 5 connections for each person, with respect to the given probabilities, or 2 for the anomalous data points. We choose this since defining less-used accounts as anomalies allows us later to show a benefit of one-off networks.
Now for each person in this network, we only look at the local surrounding of this person. This is done by taking only the connections of the friends, or friends of friends, of this person into account. This generates much smaller graphs that we could feed into the autoencoder<note you could ask yourself if this reusing of nodes does not result in a lot of overfitting, but as you see below, that is not the case, possibly because of the low number of parameters in the graph autoencoder>, but for simplicity we cut the size of those new graphs a bit, as we allow for at most 70 nodes<note this keeps #0.9932# of all data points>.
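Extracting such a local neighbourhood can look like the following sketch, where `adj` maps each user to a set of friends and the name `local_graph` is our own assumption:

```python
def local_graph(adj, person, max_nodes=70):
    """Induced subgraph on `person`, its friends and friends of friends.
    Returns None for the few neighbourhoods larger than max_nodes."""
    nodes = {person} | set(adj[person])
    for friend in adj[person]:
        nodes |= set(adj[friend])
    if len(nodes) > max_nodes:
        return None
    # relabel the kept nodes 0..k-1 and keep only edges inside the subgraph
    idx = {v: k for k, v in enumerate(sorted(nodes))}
    edges = sorted((idx[u], idx[v]) for u in nodes for v in adj[u]
                   if v in nodes and u < v)
    return idx, edges
```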
<subsubsection title="Training" label="netstrain">
<i f="none" wmode="True">network plot for nets</i>
To train this network, we use a fairly simple setup, compressing the 70 nodes once by a factor of 5, resulting in 14 nodes with 12 features each.
As you see in the training curve, the loss is basically the same for training and validation, and contains steps, as also seen before<note for example in Chapter (ENTER CHAPTER)>.
Much more interesting is the loss distribution:
<i f="none" wmode="True">loss distribution for nets</i>
As you see, the reconstruction is not very good, as basically all events have a nonzero loss. But maybe even more interestingly, there is some difference in the reconstruction of our anomalous data points. This might seem like something you could use to separate data points, but there is a difficulty: if you use an autoencoder to separate datasets, you assume that a dataset the network never saw will be reconstructed worse than the dataset it was trained on, but here the opposite is the case: the anomalous data is reconstructed more easily. So any separation is a bit weird: sure, you could just look at something like #1-loss#, but you do not have any reasoning for this. Maybe, and even probably, only this kind of anomalous data will be reconstructed more easily<note this is probably the case here, since the anomalous data is less complicated, as it contains fewer nodes>, and by negating the loss you would not get any useful separation on other data points. So what can we do? Your first instinct might be to just look at this distribution and separate accordingly: define everything on the side that you need as weird. But this would no longer be unsupervised, as it requires information about the anomalous data points. So as an alternative, use one-off networks: in their easiest version, they take the mean of the training peak and define the distance to this peak as anomaly score, which already solves this problem, and in their deep implementation they might even improve on this further. Anyhow, this works quite well:
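In its easiest version, the one-off score just measures how far any loss lies from the training peak, so both suspiciously low and suspiciously high reconstruction losses count as anomalous. A minimal sketch (the function name is our own):

```python
def oneoff_scores(train_losses, losses):
    """Distance of each loss to the centre of the training loss distribution."""
    center = sum(train_losses) / len(train_losses)
    return [abs(l - center) for l in losses]

# anomalies that are reconstructed *better* than the training data
# still get a high score, unlike with the raw loss
scores = oneoff_scores([0.9, 1.0, 1.1], [1.0, 0.2, 1.9])
```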
<i f="none" wmode="True">oo dist for nets</i>
Please note that we dispense with giving any number here measuring how good the reconstruction is, as we could improve it arbitrarily by changing the data generation.
<subsubsection title="What's Next" label="netsnext">
Given this, you might notice that this application is not thought through completely. This is why we include this subchapter, to give you some ideas on what could be improved further. The first thing you might need is to work with more kinds of anomalous data points, and not just neglected profiles. It might also be a good idea to work on an actual social network, and if you do this, it would be interesting to just look at the users in the training set that are reconstructed worst, as this would allow you to find abnormal users that have not yet been noticed, even when they are already fairly common.
<subsection title="Accelerating molecular science through pooling" label="mol">
Our second example of an alternative use case works on molecules: as they are usually described only by interactions between pairs of atoms, they are well described by graphs. Here we want to use this to suggest that the compression step of a graph autoencoder can accelerate a network trying to learn a function of a molecule.
The corresponding tutorial can be found here (ENTER LINK) and the full code is found here (ENTER LINK).
<subsubsection title="Data generation" label="moldata">
All our data points here are random molecules that come from chemspider.com, mostly since they allow you to easily download a complete description of a molecule, including not only all atoms, but also suggested connections<note you could also generate those connections yourself, but this would require a different algorithm instead of topK, more something that connects everything within a fixed distance (see Appendix (ENTER APPENDIX))>, as well as the molecular mass, here given in #g/mol#, which is what we use as the network output.
Every atom is given by 3 spatial coordinates, that give the position relative to the other atoms in the molecule, and another attribute detailing the type of atom<note while writing this: one-hot encoding this might actually be a bad idea>. Every other information given for each atom is ignored, similar to information given about edges, except for which atoms are connected.
These data points get filtered a lot, since, as long as overfitting is not a problem, fewer events do not matter as much as faulty ones. First, we only allow molecules constructed entirely from the atoms H, C, O and N. Then we allow at most 50, and at least 25, atoms, of which between 10 and 25 are to be hydrogen, and check if the downloaded file is consistent<note the molecular formula matches the distribution of atoms>.
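The filter is simple enough to state directly; a sketch, with `atoms` given as a list of element symbols (the function name is ours):

```python
ALLOWED_ATOMS = {"H", "C", "O", "N"}

def keep_molecule(atoms):
    """Keep only molecules built from H/C/O/N with 25..50 atoms,
    10..25 of which are hydrogen."""
    if not set(atoms) <= ALLOWED_ATOMS:
        return False
    n_h = atoms.count("H")
    return 25 <= len(atoms) <= 50 and 10 <= n_h <= 25
```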
<subsubsection title="Training" label="moltrain">
<i f="none" wmode="True">modelsetup(s) mol</i>
We again use a fairly simple setup, consisting only of a handful of graph update layers, and possibly a graph compression layer, whose effect we compare.
<i f="none" wmode="True">training results mol 5/6</i>
There are two things to note here: first, since the mass can reach an order of magnitude of #1000*g/mol#, and since the difference is squared, the loss can reach very high values at the beginning of the training. This is also why you see orders of magnitude of change in the loss function. As you also see, the change in loss differs between the compressing network and the non-compressed version: as both networks require similar times to calculate a training epoch, the compressed version requires more than 100 epochs less to reach a similar result. That being said, the non-compressed version reaches a slightly lower minimal loss of #73# in comparison to #78#, even though it should be noted that this difference is tiny compared to the initial losses, and the compressing version has a bunch more parameters that could be tweaked to change this.
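The scale of the initial loss follows directly from squaring the mass differences; a quick back-of-the-envelope check in code:

```python
def mse(pred, true):
    """Mean squared error, the loss used here on the molecular mass."""
    return sum((p - t) ** 2 for p, t in zip(pred, true)) / len(pred)

# an untrained network predicting masses near 0 for targets of order
# 1000 g/mol starts around 10**6, so the drop to a final loss of ~75
# spans roughly four orders of magnitude
initial = mse([0.0, 0.0], [800.0, 1200.0])  # 1040000.0
```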
<subsubsection title="What's next?" label="molnext">
We don't want to call using a compression layer to pool graph networks generally a good idea, but if you have a network that takes an unbearable time, trying to insert a compression layer might be a good idea. It might also be interesting to optimize the hyperparameters of the compression layer, or even to alter the setup, for example by using an abstraction layer. Finally, this is tested on a fairly easy setup, and it might be possible to run this on a more complicated setup like ParticleNet.
<subsection title="High level Machine Learning and Feynman Diagrams, or how I learned to stop thinking and love the graphs" label="feyn">
Machine learning is usually only used on low-level data: inputs that are easily generated but time-consuming for humans to understand. So why not apply machine learning to highly abstracted concepts? You might ask why one would want this; we think of a theory evaluation method: if you have a number of predictions, this could classify weirdness in the sense of finding predictions that don't match the rest. In the best case you could also extend theories consistently: you can generate new inputs from existing ones. You could automatically bring structure to your predictions by looking at the compression space of an autoencoder, or you could use this to simplify complicated theories. So why don't we do this? Two things come to mind: most predictions are not of vector form, and generating a lot of predictions is quite hard. Luckily, both are solved by the graph setup: the graph structure is way more powerful, to the point that AI often encodes knowledge in graphs (see (ENTER REFERENCE)), and since overfitting has not been a problem at all here, the low number of training samples should not matter either<note there is a second price you pay when you train on few data points: not only does overfitting become more probable, but you also lose generality, as density fluctuations of the different kinds of training samples (where these types of samples are defined by the training itself, which makes them hard to filter out) start to matter more. Sadly we cannot really change this too much>.
Now consider Feynman diagrams: as they are able to encode all of particle physics in a finite set of graphs, they are at the same time very high level, while still providing #O(100)# samples, which might not be a good size for our training set, but should still be workable. And finding anomalous Feynman diagrams might actually be an interesting way to solve the initial problem of using graphs to find new physics.
<subsubsection title="Data generation" label="feyndata">
Data generation for Feynman diagrams means converting data rather than outright generating it. The problem is that all diagrams you find are usually given as images, and writing a program to read every image into a diagram is absolutely nontrivial, which is why we just converted those diagrams by hand<note you could actually use a graph neural network for this, built similar to the one from the next chapter (ENTER CHAPTER)>. You could ask yourself if writing an image-based autoencoder to work on those images would not be much less work. And even though we would agree, we think this would also work much worse, as you could not differentiate between an image that just looks like a Feynman diagram, and an image that actually represents some physical insight. If everything looks like a Feynman diagram, you can easily use the loss to differentiate those two cases, since a change in loss now definitely represents a better reconstruction in the autoencoder we will train. Also, by training on images you would again more probably see overfitting, resulting in more needed training samples, which we don't have.
We use all diagrams from (ENTER REFERENCE)<note these diagrams are of relatively low order> that match our filter of only SM diagrams and at most 9 lines, and represent each diagram in the following way: each line becomes a node, and each two lines that meet in a vertex are connected. This might seem counterintuitive at first, as we basically switch nodes and edges, but it is actually necessary, since each edge requires two nodes, and in a usual diagram, input as well as output lines only have one vertex. Then each line (node) is represented by a 14-dimensional vector, one-hot encoding the particle type (gluon, quark, lepton, muon, Higgs, W boson, Z boson, photon, proton, jet), 3 special boolean values encoding anti-particles<note for simplicity this variable is always zero for lines that are neither input nor output>, input lines and output lines, and a fourteenth value that is always 1 (flag).
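This line-to-node encoding can be sketched as follows. The particle ordering and the function names are our own assumptions, but the 14-dimensional layout (10-way one-hot, three booleans, one flag) follows the text:

```python
PARTICLES = ["gluon", "quark", "lepton", "muon", "higgs",
             "wboson", "zboson", "photon", "proton", "jet"]

def encode_line(particle, is_anti=False, is_input=False, is_output=False):
    """14-dim node vector for one diagram line."""
    vec = [1.0 if p == particle else 0.0 for p in PARTICLES]    # one-hot type
    vec += [float(is_anti), float(is_input), float(is_output)]  # booleans
    vec.append(1.0)                                             # constant flag
    return vec

def diagram_to_graph(lines, vertices):
    """Switch nodes and edges: every line becomes a node, and two lines are
    connected iff they meet at a common vertex.
    `vertices` is a list of sets of line indices."""
    nodes = [encode_line(*line) for line in lines]
    edges = {(i, j) for vert in vertices for i in vert for j in vert if i < j}
    return nodes, sorted(edges)
```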
<i f="none" wmode="True"> Example Image of the conversion </i>
<subsubsection title="Training" label="feyntrain">
<i f="none" wmode="True">model plot feyn</i>
Here too, a fairly easy setup is used, but instead of the compression algorithm we use the abstraction one, and the paramlike deconstruction algorithm replaces the classical one, to encode an abstraction by a factor of 3 (reducing 9 nodes into 3). Therefore we add 3 parameters, as well as a couple more graph update steps. One thing that might be important later is that we don't punish the resulting graph structure directly, even though the paramlike deconstruction algorithm should make this possible, but only indirectly, since a nonsensical graph structure will worsen the quality of the update step.
<i f="none" wmode="True">training plot for feynnp</i>
As you see, the training curve improves quite drastically after the initial plateau, just to slow down later, and reaches a validation loss below #0.05# at the end, which we are fairly happy with, since it means that, converted to booleans, only about 1 in 20 values is wrong<note since the results are not booleans, this is only true on average>. More interestingly, you also see that the validation loss is consistently lower than the training loss, which means that this network, at least, does not overfit.