This page concentrates on the compilation process and related central modeling concepts of vvvv50.
The main thing to be said is that we distinguish between
and finally
The process of compilation is therefore separated into two steps:
A platform registry registers all platform specific services like
Source code is meant to be saved/loaded from disk or sent to another process, like an external runtime. It is therefore structured so that each element can be addressed by an ID (string), and everything in it needs to be serializable.
Currently it is a mutable model: Executing commands changes the source model.
To be able to edit values within the code, we need editors that can do the job for a specific type.
Each type that wants to be treated natively by the GUI therefore needs to support
Now: our HDE runs on .NET. That means that when we deal with runtime values, we deal with values of a CLR type. The most natural way of describing those values and runtime types is this:
Source Code elements that store values have those properties.
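As a sketch of that idea (the names here are illustrative, not the actual HDE API), a value-storing element could simply pair the runtime value with the name of its CLR type:

```python
# Hypothetical sketch: a value-storing source code element pairs the runtime
# value with its CLR type name. Names are illustrative, not the actual API.
class ValueElement:
    def __init__(self, value, clr_type_name):
        self.value = value                  # the runtime value itself
        self.clr_type_name = clr_type_name  # e.g. "System.Double"

slider_value = ValueElement(0.5, "System.Double")
```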
For user contributed types we probably need a way to register certain dlls at the HDE. It is still to be discussed whether this is a per solution/project property or a static thing that you do once for your programming environment.
Symbols are platform independent. You won't find any .NET specific method or parameter definitions or types in here:
// base class for the symbolic model
Symbol
// identifies a symbol
SymbolID
// identifies a global symbol
GlobalSymbolID : SymbolID
// represents any type
Type: Symbol
// a type parameter is an undefined type placeholder. it has some constraints for its application later on. at the moment a GenericNodeReference (= a generic method) or a GenericProductTypeDefinition (= a generic struct or type) can have type parameters.
TypeParameter: Type
// just not a type parameter. a concrete type exists and is globally accessible
ConcreteType: Type
// built-in numeric values. several instances with different properties may coexist
Numeric : ConcreteType
// all string text related types
Text : ConcreteType
// a product type is something like a struct or class. it is somewhat the product of its fields
ProductType : ConcreteType
// a field of a product type
ProductTypeField: Symbol
// a generic product type has one or more type parameters
GenericProductTypeDefinition : ProductType
// a generic product type application replaces all type parameters with types. while building that, you need to have a look at the constraints of the parameter and either use a type argument that statically fulfills those constraints, or build a new type parameter that has at least equivalent constraints
GenericProductTypeApplication : ProductType
// the output type of a node abstraction that results from striking out pins
AbstractionType
// everything that gets in touch with data
DataHub: Symbol
// a pin application within a node application
PinApplication: DataHub
// the application of a node results from putting a node into a patch.
NodeApplication: Symbol
// a pin reference is part of a node reference. node references can result from node collectors.
PinReference: DataHub
// a node reference could result from parsing assemblies or other projects for nodes. it therefore doesn't offer body details
NodeReference: Symbol
// a pin definition is the same as the reference, but also has a visual representation in the source code: the big grey quad. It is part of a node definition.
PinDefinition: PinReference
// a field accessor only exists in the node definition. it won't directly affect the node reference. the compiler needs it to read/write fields of the state
FieldAccessor: DataHub
// a node definition results from a patch. the definition is the same as the reference but also gives insight into the body
NodeDefinition: NodeReference
// a compilation offers an overview of all node definitions. it also offers an overview of all static nodes at the time of compilation.
Compilation
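The inheritance relations in the listing above can be transcribed into a minimal sketch (Python used purely for illustration; the class names and relations follow the document, the bodies are intentionally empty):

```python
# Sketch of the symbolic model hierarchy listed above.
class Symbol: pass
class Type(Symbol): pass
class TypeParameter(Type): pass
class ConcreteType(Type): pass
class Numeric(ConcreteType): pass
class Text(ConcreteType): pass
class ProductType(ConcreteType): pass
class GenericProductTypeDefinition(ProductType): pass
class GenericProductTypeApplication(ProductType): pass
class DataHub(Symbol): pass
class PinApplication(DataHub): pass
class NodeApplication(Symbol): pass
class NodeReference(Symbol): pass
class PinReference(DataHub): pass
class PinDefinition(PinReference): pass
class FieldAccessor(DataHub): pass
class NodeDefinition(NodeReference): pass
```

So, for example, `issubclass(Numeric, Symbol)` holds: every built-in numeric type is a concrete type, hence a type, hence a symbol; and every node definition is usable wherever a node reference is expected.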
Symbols get created by the frontend compiler and by node collectors that scan external references.
Target Code is platform specific. It may be necessary to have different implementations of one node for different platforms.
Basically a platform needs to be able to collect nodes, compile node definitions into target code and run target code.
All those 3 services will need to exchange some information that is valuable for the specific platform. In the following we only look at the interface to the application.
Factory
Let's have a look at platform / implementation specific elements:
NodeImplementationDescription
// a platform specific service that collects nodes out of external references
INodeCollector
BackendCompilerResult
IBackendCompiler
IRuntime
Target Code is platform specific code that can be run on a specific platform. It is generated by the backend compiler.
Let's do that example with our CIL backend compiler in mind.
Its job is to create types and methods in CIL (its target code).
To do that job it has to have access to the CIL methods that are implicitly used in the body of a patch (by using node applications).
Some of those CIL methods are generated by the compiler itself, others are loaded by collecting nodes of a CIL assembly.
Let's focus on the static CIL methods that are loaded when a CIL assembly reference is added:
A CILNodeCollector then needs to generate the following lookup table:
In general we have node collectors for different types of platforms.
Node collectors get triggered when a reference is added to the project and as a result collect node references (symbols) for that project and install them in the projects node factory.
When they encounter something that should get a node, they check the factory whether the node reference already exists. If not, they let the factory create a new one; otherwise they just take the existing one.
The only little trick is that they also need to store, in a dictionary, the original platform specific method from which the node definition resulted.
In our case a .NET assembly is parsed and CCI IMethodDefinitions are stored.
It could also store IParameterInfos or whatever else would help to do the compilation job at the end.
Now if the compiler can access those dictionaries, it can do its job and create target code that calls the platform specific methods.
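A minimal sketch of that interplay (hypothetical names, Python standing in for the actual implementation): the collector fills a dictionary from node references to platform specific methods, and the compiler consults it when emitting a call:

```python
# Hypothetical sketch: the node collector associates each node reference with
# the platform specific method it was collected from; the backend compiler
# consults this dictionary when it emits a call.
static_cil_methods = {}  # node reference -> platform specific method handle

def collect(node_ref, cil_method):
    # reuse an existing entry if the reference was collected before
    static_cil_methods.setdefault(node_ref, cil_method)

def emit_call(node_ref):
    method = static_cil_methods[node_ref]
    return f"call {method}"

collect("+ (Math)", "System.Double::op_Addition")
```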
The main task of the compiler is to again create such a lookup table - this time for the node definitions resulting from patches.
Not all node references are available for all platforms. In our case the StaticCILMethods of the node collector would just lack some entries. So it may be that a backend compiler can't compile all node definitions. The backend compiler reflects that in the backend compiler result: It just enumerates all node definitions that were compiled.
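A sketch of that partial compilation (with a hypothetical data layout): only node definitions whose used nodes all exist on the platform end up in the result:

```python
# Sketch: a backend compiler can only compile node definitions whose used
# nodes are all available on its platform; the result enumerates what was
# actually compiled (hypothetical data layout).
def compile_for_platform(node_definitions, available_nodes):
    return [name for name, uses in node_definitions.items()
            if all(n in available_nodes for n in uses)]

defs = {"MyPatch": ["+ (Math)", "Sin (Math)"], "MyOtherPatch": ["Renderer"]}
result = compile_for_platform(defs, {"+ (Math)", "Sin (Math)"})
# "MyOtherPatch" is missing from the result: "Renderer" has no entry
```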
It is important that the backend compiler only creates new target code for new symbols. Why? See: Runtime...
The runtime should be friends with the compiler and the node collector so that it can access both lookup tables, for static and for dynamic nodes.
A runtime can be thought of as an instance of the state type of the node definition that is given on creation of the runtime.
It can
Reusing old platform specific values is the difficult part. For that to work it is important that unchanged types (in the symbols) lead to unchanged types in the target code.
As long as the source code is mutable, every source code element can carry a Changed flag.
The source code itself is free of symbols.
There is a lookup (dictionary) from source code elements to the currently valid symbols.
At the start of a compile run, all Changed flags are set to false and, at the same time, the old symbols are removed from the lookup.
Later compiler steps will thereby detect which symbols have to be newly created and which can be reused.
In short: Symbol = Lookup[Source];
Commands lead to changed flags in the source.
When a command sequence ends (e.g. when it is added to the history), the complete solution is traversed and all implicitly changed parts are also marked as changed.
STATE: Changing
a) mark an element as changed; if it was not changed yet, mark it immediately and, as a reaction:
b) mark its owner as changed
c) when the link list of a patch changes, all nodes and pins of the patch are marked as changed
d) when a patch changes, all node refs are marked as changed
e) when the in/outlet list changes, the in/output list of the referencing nodes is marked as changed
f) when an in/outlet changes, the in/output of the referencing nodes is marked as changed
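Rules a) and b) can be sketched like this (hypothetical representation; the remaining rules are analogous fan-outs to dependent elements):

```python
# Sketch of rules a) and b): marking an element as changed also marks its
# owner, but only on the first marking, so the propagation terminates.
def mark_changed(element):
    if element.get("changed"):
        return                    # already marked: stop, no double work
    element["changed"] = True     # a) mark the element itself
    owner = element.get("owner")
    if owner is not None:
        mark_changed(owner)       # b) the owner is marked changed too

pin = {"owner": {"owner": None}}
mark_changed(pin)
```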
STATE: Changes_Done
One could now wait for further changes: either there is no autocompile, or autocompile waits a short moment...
If new changes come in, they can be added with the system above.
Now new semantic objects have to be provided. At first these can be completely empty and can be filled in step by step.
STATE: Rebuilding_Semantics
a) LastRootSymbol = RootSymbol; LastSymbolLookup = SymbolLookup.Clone(); (..)
b) Traverse the model. For changed sources:
//Sync(SourceCodeElement el, Dictionary<SourceCodeElement, object> lookupSemantic, Func<SourceCodeElement, object> creator)
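In Python terms, that Sync step could look like the following sketch (the signature in the comment above is .NET; the names here are kept analogous):

```python
# Sketch of the Sync step: for a source element, reuse the semantic object
# from the lookup if one survived the compile start, otherwise let the
# creator build a fresh (initially empty) one.
def sync(element, lookup_semantic, creator):
    if element not in lookup_semantic:
        lookup_semantic[element] = creator(element)
    return lookup_semantic[element]
```

Since the old symbols of changed elements were removed from the lookup at the start of the compile run, changed sources get fresh symbols here, while unchanged ones keep their old identity.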
STATE: NewSymbols_Infering_Types
STATE: NewSymbols_Building_Signatures
STATE: NewSymbols_Building_Bodies
The most important backend compiler for now is the CIL-Compiler that generates IL Code for the .NET platform. Other backend compilers could address platforms like OpenGLShader, DirectXShader, Arduino, Raspberry Pi, Javascript...
The idea would be that nodes can exist for different platforms / computing devices. Let's say that + (Math) and Sin (Math) exist for the CIL platform and for the Arduino platform. If you now manage to build a patch that only uses nodes that exist for those two platforms, the resulting new node will also exist for both platforms.
In other words: The backend compilers can only finish their work if all of the used nodes are already available on the specific platform.
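In other words, the set of platforms of a patched node is the intersection of the platform sets of the nodes it uses; a sketch with hypothetical data:

```python
# Sketch: a patched node exists exactly on the platforms shared by all of
# the nodes used in its patch (hypothetical data).
def platforms_of_patch(used_nodes, platforms):
    result = None
    for node in used_nodes:
        p = set(platforms[node])
        result = p if result is None else result & p
    return result if result is not None else set()

platforms = {"+ (Math)": {"CIL", "Arduino"}, "Sin (Math)": {"CIL", "Arduino"}}
```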
Now if you want to expose some node to be available for a certain backend compiler / platform, there are two scenarios:
In either case additional references for the platform have to be added to the project, so that the compiler can access them.
A backend compiler would now look up all nodes in a patch, retrieve their node definitions and then look up the platform specific method definitions.
It would then go on to create platform specific target code (IL, JavaScript, C, x86 binary, ...) and call the platform specific methods at the appropriate places.
A backend compiler maps symbols to target code that its platform can run.
It is its responsibility to map a specific immutable symbol to an immutable target code only once.
If it does so, it eases the job of its platform specific runtime to carry state from one revision to the next.
It is pretty easy to do so, however:
A backend compiler just needs to store a lookup for all target code that has been successfully compiled by itself:
So basically, for all symbols that get compiled by the backend compiler, a lookup needs to be installed (either a single object -> object lookup or a list of typed lookups). When a new frontend compilation is handed to the backend, it checks whether the lookup already offers a target symbol; if it does, it just takes that one, otherwise it generates a new one.
After compilation is done, old symbols should be removed.
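Sketched, that lookup amounts to memoizing the compile step per symbol:

```python
# Sketch: the backend compiler memoizes its output per symbol, so an
# unchanged (reused) symbol maps to the very same target code again.
compiled = {}  # symbol -> target code

def compile_symbol(symbol, emit):
    if symbol not in compiled:
        compiled[symbol] = emit(symbol)  # new symbol: generate target code
    return compiled[symbol]              # known symbol: reuse target code
```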
The platform specific runtime, however, always holds two revisions of its types, to be able to reuse values in the state from the old revision.
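A sketch of that state migration between two revisions (hypothetical, with dicts standing in for the generated state types):

```python
# Sketch: when a new revision of a state type arrives, the runtime copies
# values of fields that still exist and falls back to defaults for new ones.
def migrate_state(old_state, new_defaults):
    return {field: old_state.get(field, default)
            for field, default in new_defaults.items()}

# e.g. a field "y" was added to the patch; "x" keeps its runtime value
```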
STATE: NewSymbols_Building_CCI_Metadata
STATE: NewSymbols_Building_CCI_AST
STATE: NewSymbols_Emitting_To_Dynamic_Assembly
STATE: NewSymbols_Building_State
STATE: Setting new entry point