Any suggested design patterns for centralizing UI state?

In the 2019.10000 series of builds, this is exactly what binding and custom parameters are for. The UI state should live in custom parameters on the components where you actually ‘use’ it, and those custom parameters should be the master of a bind between them and the Value0 parameter on the Widget UI components. This way the widget UI components can change without breaking the functionality of the system, and the system can work entirely without the UI when it’s set up this way.
This also makes it easier to drive the system from some other source, such as OSC or MIDI.

Hey Malcom, are there any tutorials or OP snippets etc. about best practices for working with Widgets/bindings? I’ve been playing a bit, but if there were some centralized resources explaining when to bind as master vs. reference, etc., that would be helpful. Thanks!

Videos on this are coming soon.

On my first TouchDesigner project I tried, to the best of my abilities, to emulate an MVC (Model-View-Controller) style UI implementation. While that approach is very successful and widely used in object-oriented languages and platforms, the exact implementation seemed clumsy and non-intuitive in TouchDesigner. What follows is a speculative rant about the trouble I ran into implementing this design pattern in TouchDesigner.

There are several issues I have identified with such an approach. The first and most prominent is that developing in TouchDesigner is not at all the same as developing in an object-oriented world.

The contrast is most obvious when observing that in an ideal MVC plan you have your view code entirely isolated from the ‘model’ and the ‘controller’ code. The view code, ideally, only responds to changes in the state of the underlying model. In an object-oriented language one can simply react to changes on the publicly accessible properties of an object. Where might we find such a ‘model’ for a replicated node, or indeed any node in general? The answer is that a lot of plumbing needs to be added to accomplish that, because there is no such thing as a ‘class’ amongst assemblages of TD nodes.
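To be concrete about the kind of observable ‘model’ I mean, here’s a minimal sketch in plain Python (the class and names are mine, purely for illustration, not anything TD provides): a model that notifies registered views when a property changes, so the view never has to poll or know where the change came from.

```python
# A minimal observable "model" (illustrative only): views subscribe
# and are called back whenever a property changes.

class Model:
    def __init__(self):
        self._props = {}
        self._listeners = []

    def subscribe(self, callback):
        # Views register here to be notified of state changes
        self._listeners.append(callback)

    def set(self, name, value):
        # Mutate state, then notify every registered view
        self._props[name] = value
        for cb in self._listeners:
            cb(name, value)

    def get(self, name):
        return self._props.get(name)

# A "view" only reacts to model changes; it never mutates the model directly.
log = []
model = Model()
model.subscribe(lambda name, value: log.append((name, value)))
model.set('crossfade', 0.5)
# log is now [('crossfade', 0.5)]
```

This is the plumbing that comes for free in most OO frameworks but has to be built by hand (or approximated with DATs and callbacks) in a node network.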

My attempt to create ‘models’ in TouchDesigner was to create Table DATs that listed properties, but the work needed to build these always took more time than other approaches. The technique ultimately proved obscure and didn’t stick with the other devs on the project. It is an effort to shunt an object-oriented idea into a visual language’s node-based paradigm.

I do not doubt that a developer could refine something like an OO MVC pattern in TD into a more usable state than I did, but I feel like the absence of ‘true’ models in TouchDesigner denies an important axiom of that style of development.

Now, that sounds a little negative - denied axioms, lacking models, etc. Really, though, all that is required is a node-based design pattern. I have not yet seen such a pattern, and, not having a wealth of experience on the platform, I haven’t yet come up with one for this problem.

One approach to achieve something like an MVC architecture:

Your “model” is a hierarchy of (Base) COMPs which each have some custom parameters, and internally use the values of those parameters to produce some kind of output. For example, a video mixer component could take two TOP inputs, and have parameters for the blend mode and crossfade amount. Internally, it uses the values of those to combine the two inputs into a TOP output. These components can run on their own and don’t know or care about anything UI related.

Your “view” can be a separate hierarchy (Container/Widget/etc) COMPs which bind controls to the parameters on the “model” components. You can even have the “view” part run in a separate process (or on another machine), piping values over to the “model” system through OSC or Touch IN/OUT OPs.
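As a rough analogy for how the bind between view controls and model parameters behaves, here’s a plain-Python sketch (the Par class and its attributes are mine, not TD’s API): a bound parameter delegates both reads and writes to its master, so the UI and the model can never drift apart.

```python
class Par:
    """A parameter that can optionally bind to a master Par (illustrative only)."""
    def __init__(self, value=0):
        self._value = value
        self.bind_master = None  # when set, all reads/writes go to the master

    @property
    def val(self):
        if self.bind_master is not None:
            return self.bind_master.val  # bound: mirror the master's value
        return self._value

    @val.setter
    def val(self, v):
        if self.bind_master is not None:
            self.bind_master.val = v  # writing through the UI updates the model
        else:
            self._value = v

# The "model" component owns the master parameter;
# the widget's Value0 parameter binds to it.
crossfade = Par(0.25)
widget_value0 = Par()
widget_value0.bind_master = crossfade

widget_value0.val = 0.8   # a UI interaction...
# crossfade.val is now 0.8 -- the model parameter was driven through the bind
```

The point of the delegation is that you can swap or delete the widget without touching the model, which is exactly the decoupling described above.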

Coming from a (traditional) programming background, I’ve found that trying to force traditional design patterns onto TouchDesigner can result in systems that are brittle and inefficient. It’s a matter of picking the aspects that apply well and forgoing those that don’t. The MVC approach does tend to fit pretty well, especially for UI development. But anywhere that’s in the “critical path” of the system (as in, stuff that executes on every single frame or has a lot of data throughput) works best when sticking to TD’s native data-flow concepts as much as possible. Then you just wrap those parts in components with a well-defined interface (custom parameters).

Jarrett pulled heavily from MVC design patterns when creating Widgets and binding. I’m working on a project with Widgets right now, and it’s really clean working this way, IMO.

Hi,
I watched the 7 parts of the Widgets tutorial and built a UI based on that, which works great. Where I am stuck now is connecting an actual MIDI controller to it, together with some additional channels coming from the keyboard.
As far as I understand, binding works only between parameters, but how can you drive a parameter from a CHOP channel, since any reference expression disables the binding?

I put together a CHOP Execute DAT that listens to the MIDI output and updates the UI parameters, which seems to work. But this approach doesn’t seem scalable: in a stress test with 100 channels driving 100 parameters, the script started stalling.

def onValueChange(channel, sampleIndex, val, prev):
    command = 'parent().par.' + channel.name + '=' + str(val)
    exec(command)
    return

Not sure if the bottleneck is the exec() function. Is there maybe a way to access a parameter given a string value? Something like
me.par(‘Value0’)=1
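Outside of TD specifics, the generic Python pattern I’m after would be setattr(), which sets an attribute by string name without building and compiling source text on every call the way exec() does. A self-contained sketch (the Params class is just a stand-in for a parameter collection, not TD’s API):

```python
# Stand-in for an object holding parameters (illustrative only)
class Params:
    Value0 = 0
    Value1 = 0

p = Params()

# Instead of building source code and exec()ing it...
#   exec('p.' + name + '=' + str(val))   # compiles a new code object per call
# ...set the attribute directly by its string name:
name, val = 'Value0', 1
setattr(p, name, val)
# p.Value0 is now 1
```

If I recall correctly, TD’s par collection also supports bracket-style lookup, something like parent().par[channel.name].val = val, which would avoid exec() entirely - though I’d want to confirm that against the docs.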