I'm using the Embedded Browser asset to show website content in info panels that appear and disappear as the user requests them. The TouchScript library does a fantastic job of moving and interacting with the containers surrounding the browser instance, but there's a situation where I stop being able to interact with the browser instance itself. I'm wondering whether TouchScript is doing something under the hood that might be involved, because I can't find anything relevant in the Embedded Browser asset. Here's what's going on, starting with some context:
- I'm using a 70" PCAP multitouch screen. The multitouch input is provided to the computer through a USB cable.
- The Embedded Browser asset maps the mouse pointer to an x/y coordinate in its browser instance frame and forwards mouse button clicks to the browser input as well. I've extended that input interface class to take TouchScript PressGesture and ReleaseGesture gestures to set the left mouse button down or up, and a TransformGesture to provide an alternative x/y coordinate. I.e., I'm faking the mouse input, and as far as Embedded Browser knows it's receiving mouse input.
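For reference, my adapter is roughly shaped like the sketch below. The TouchScript gesture types and their `Pressed`/`Released`/`Transformed` events are real; note that `TransformGesture` lives in `TouchScript.Gestures.TransformGestures` in newer TouchScript versions. The `MousePosition`/`LeftDown` properties and the coordinate mapping are placeholders for whatever your extended Embedded Browser input class actually consumes, so treat this as an outline, not the asset's API:

```csharp
using System;
using TouchScript.Gestures;
using UnityEngine;

// Hypothetical adapter: PressGesture/ReleaseGesture drive the left
// mouse button state, TransformGesture drives the pointer position.
// An extended Embedded Browser input class would read MousePosition
// and LeftDown each frame and report them as ordinary mouse input.
public class GestureMouseAdapter : MonoBehaviour
{
    // Normalized 0..1 coordinates inside the browser frame.
    public Vector2 MousePosition { get; private set; }
    public bool LeftDown { get; private set; }

    void OnEnable()
    {
        GetComponent<PressGesture>().Pressed += OnPressed;
        GetComponent<ReleaseGesture>().Released += OnReleased;
        GetComponent<TransformGesture>().Transformed += OnTransformed;
    }

    void OnDisable()
    {
        GetComponent<PressGesture>().Pressed -= OnPressed;
        GetComponent<ReleaseGesture>().Released -= OnReleased;
        GetComponent<TransformGesture>().Transformed -= OnTransformed;
    }

    void OnPressed(object sender, EventArgs e)
    {
        LeftDown = true;
        MousePosition = ToBrowserCoords(((Gesture)sender).ScreenPosition);
    }

    void OnReleased(object sender, EventArgs e)
    {
        LeftDown = false;
    }

    void OnTransformed(object sender, EventArgs e)
    {
        MousePosition = ToBrowserCoords(((Gesture)sender).ScreenPosition);
    }

    Vector2 ToBrowserCoords(Vector2 screenPos)
    {
        // Placeholder mapping: a real panel would raycast onto the
        // browser surface instead of using raw screen proportions.
        return new Vector2(screenPos.x / Screen.width,
                           screenPos.y / Screen.height);
    }
}
```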
- The above solution works both in the Editor and in the Standalone Player, BUT... only if the touchscreen's USB cable is unplugged at the moment the browser instance is created. Once the instance is on screen I can connect the USB cable, Windows plays its little 'found hardware' sound, and I can successfully interact with the browser instance through TouchScript. However, if the USB cable is already plugged in when I instantiate the browser instance, no dice. Through debug logging I know that the asset is receiving touch input data; it just doesn't result in actual browser interaction under the circumstances described here.
If you're thinking of telling me that the smoking gun points squarely at the Embedded Browser plugin, I would agree with you. However, I've combed through its code and I don't see any if-then clauses or switch cases that react to the presence of underlying touch hardware. I've contacted the developer and he's also confused. I've started looking through the TouchScript sources, so far without insights. That's why I'm asking: does TouchScript actively detect underlying touch-input hardware? If so, I'd love to know where that happens; it might help me bug-hunt more effectively.
Once again sorry if this is vastly off-topic. Hopefully it will spawn some useful insights for you and for me.
ps -> in the Standard Input panel I have the Win8+ and Win7 APIs set to 'Unity', but am I correct in thinking that these settings are only relevant to standalone builds?