to note: the contour isn't updated at 50fps here (the camera feed's only 30fps)
the main graph is running at 50fps
YEEEEEEEEEEAAAAAAAAA!!!!!!!!
:)
boooooom
do you think that things like floyd-steinberg dithering in a shader are possible with your technique?
hey @sebl
interesting
dithering is of course entirely possible on CPU
the main reason for having a cpu image pipeline is to handle things like this
there is an example of per-pixel manipulation in the EmguCV template
if you're happy with c# then you can probably just jump in and implement the pseudocode in that template
otherwise, there are approximations of dithering that can be done on the GPU
here's a quick result: http://kineme.net/forum/Applications/Compositions/GLSLDitheringandCrosshatching
the most obvious way (to me) of doing this on GPU is:
find the 2 closest output palette values to your input value
select from those 2 based on your pixel's position on the screen (you'd need to calculate the pixel position yourself from your projection space position, as afaik HLSL doesn't offer a simple function for absolute pixel position)
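here's roughly that per-pixel selection sketched in C# - just an illustration, assuming a greyscale input in [0,1] and a plain black/white palette, with a standard 4x4 bayer matrix giving the position-based threshold (none of these names come from the template):

```csharp
// per-pixel ordered-dither logic the shader would run, sketched in C#
static class OrderedDither
{
    // standard 4x4 Bayer threshold matrix
    static readonly float[,] Bayer =
    {
        {  0,  8,  2, 10 },
        { 12,  4, 14,  6 },
        {  3, 11,  1,  9 },
        { 15,  7, 13,  5 }
    };

    // 'value' is the input brightness in [0,1]; (x, y) is the pixel position
    public static float DitherPixel(float value, int x, int y)
    {
        // the 2 closest output palette values (trivial for a 2-level palette)
        const float low = 0f, high = 1f;

        // position-based threshold in (0,1), taken from the Bayer matrix
        float threshold = (Bayer[y % 4, x % 4] + 0.5f) / 16f;

        // select between the two palette values based on screen position
        return (value - low) / (high - low) > threshold ? high : low;
    }
}
```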
one issue is there isn't a good way in VVVV to get from GPU back to CPU
so if you want to dither something you're rendering in an EX9 renderer, then you'll need to dither on GPU, or wait for Texture inputs on plugins
elliot
ai, the point is that this particular floyd-steinberg dithering can't be done in a shader (as discussed in the thread).
so i did an ultra-slow c# version using pipet to get the image into the plugin (i consider it a preparation until imageinput is implemented)
then i did a freeframe version - with the bottleneck in AsVideo (just a bit faster)
...
the link you posted describes ordered dithering, which can be done in a shader though (like the halftone shader already does)
since floyd-steinberg requires a fill function (i.e. start somewhere, move around the image), it's CPU only
you can implement it now in an EmguCV filter following the template example
but there's no 'AsVideo' equivalent for opencv yet
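for reference, the error-diffusion loop itself is only a few lines - here's a minimal C# sketch over a single-channel, row-major byte buffer (the buffer layout and method name are my assumptions; adapt the indexing to however the template hands you the pixels):

```csharp
using System;

static class Dither
{
    // Floyd-Steinberg error diffusion over a greyscale, row-major byte buffer.
    // Works in float so the diffused error isn't truncated between pixels.
    public static void FloydSteinberg(byte[] pixels, int width, int height)
    {
        float[] buf = new float[pixels.Length];
        for (int i = 0; i < buf.Length; i++) buf[i] = pixels[i];

        for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
        {
            int i = y * width + x;
            float old = buf[i];
            float quantised = old < 128f ? 0f : 255f;  // 2-level palette
            float error = old - quantised;
            buf[i] = quantised;

            // push the quantisation error onto the unprocessed neighbours
            if (x + 1 < width)                   buf[i + 1]         += error * 7f / 16f;
            if (x > 0 && y + 1 < height)         buf[i + width - 1] += error * 3f / 16f;
            if (y + 1 < height)                  buf[i + width]     += error * 5f / 16f;
            if (x + 1 < width && y + 1 < height) buf[i + width + 1] += error * 1f / 16f;
        }

        for (int i = 0; i < pixels.Length; i++)
            pixels[i] = (byte)Math.Max(0f, Math.Min(255f, buf[i]));
    }
}
```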
ah, so for now i can only dither a cam or video input
AsCVImage with a Texture In (DX9) pin would be useful...
... but ... will it be possible?
@robe - AsImage will come
but it's not so simple to create right now
with the new vvvv-sdk i might be able to look more into it
but ideally need an example of a Texture input on a plugin
wooooww
the difference mode pin... what other modes are there? is there a dynamic bg subtraction??
@manuel - more advanced background subtraction models will be available in later releases.
There are lots of models in OpenCV, such as the gaussian adaptive model, which seems to be an all-round winner
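to give a feel for what 'dynamic' means here: a running gaussian average keeps a per-pixel mean and variance that slowly track the scene, and flags pixels that sit too many standard deviations from the mean. here's a minimal C# sketch of that idea (this is not OpenCV's implementation, and the parameter values are only illustrative):

```csharp
// per-pixel running gaussian average background model - a sketch of the idea
class RunningGaussianBackground
{
    readonly float[] mean, variance;
    readonly float learningRate;   // how fast the model adapts to the scene
    readonly float matchThreshold; // in standard deviations

    public RunningGaussianBackground(int pixelCount,
        float learningRate = 0.02f, float matchThreshold = 2.5f)
    {
        mean = new float[pixelCount];
        variance = new float[pixelCount];
        for (int i = 0; i < pixelCount; i++) variance[i] = 50f; // initial guess
        this.learningRate = learningRate;
        this.matchThreshold = matchThreshold;
    }

    // update the model with a new greyscale frame, return a foreground mask
    public byte[] Apply(byte[] frame)
    {
        var mask = new byte[frame.Length];
        for (int i = 0; i < frame.Length; i++)
        {
            float d = frame[i] - mean[i];

            // foreground if too far from the mean, relative to the variance
            bool foreground = d * d > matchThreshold * matchThreshold * variance[i];
            mask[i] = foreground ? (byte)255 : (byte)0;

            // adapt mean and variance towards the current frame
            mean[i] += learningRate * d;
            variance[i] += learningRate * (d * d - variance[i]);
        }
        return mask;
    }
}
```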