Multiple displays issue with camera depth


Re: Multiple displays issue with camera depth

Postby Timo » Wed Oct 25, 2017 11:00 pm

Thanks. I already have the C# code and algorithm from the manufacturer to convert the URG data to x, y coordinates: http://urgnetwork.sourceforge.net/html/library_tutorial_page.html (the coordinate conversion code is at the bottom of the page). I have two projectors, each with a scanner at the centre of its screen; each screen is 5 m wide x 3 m high.

I'll have a look at the TuioInput class and see what's possible.

I don't think they are TUIO-capable, unfortunately.

Timo.
Timo
 
Posts: 17
Joined: Mon Oct 23, 2017 2:48 am

Re: Multiple displays issue with camera depth

Postby Timo » Tue Nov 07, 2017 8:18 pm

Hi again. I have been trying to find a TUIO option for these scanners and, without spending $2k on software, it's not going to be an option.
I had a look through the TUIO input script as you suggested, but I don't think I need to go that deep.
The scripts I have give me a Vector3 direction which I can use as a touch position. What is the easiest way to convert this Vector3 direction to screen x, y coordinates that TouchScript can use?
My first instinct is to map the Vector3 x, y to a mouse position x, y and go from there, but can you suggest a more native method for TouchScript, please?
If it's as simple as setting up a coordinate remapper, I'll have a go.

Thanks, Timo

Re: Multiple displays issue with camera depth

Postby valyard » Wed Nov 08, 2017 12:34 am

Added a tutorial on writing custom input sources.
https://github.com/TouchScript/TouchScr ... put-Source
Let me know if you have any questions.
valyard
Site Admin
 
Posts: 422
Joined: Mon Sep 08, 2014 11:57 pm

Re: Multiple displays issue with camera depth

Postby Timo » Wed Nov 08, 2017 1:42 pm

That's awesome, thank you. I will crack on with it now and let you know how it goes.

Many many thanks sir.


Timo.

Re: Multiple displays issue with camera depth

Postby Timo » Wed Nov 08, 2017 2:46 pm

Wow, a lot to comprehend there.
I think I can make it work though.
Here's the current system I have and the options for pulling coordinates. (Please forgive the essay, but writing it out helps me work out which data I can use to convert via the input source.)
For each laser scan step, three lists are updated: one for distance, one for strength, and a third for obstacles detected. A conversion algorithm turns the distance from the laser to an obstacle into x, y, z coordinates, which are stored in the lists together with the strength data as a Vector4 (strength in w). I'm not using the strength list, as I think I only need the termination coordinates of a scan beam.
In my test project, a mesh with a MeshFilter is used to convert these lists into procedural geometry. The distance list sets vertices in 3D space via a Vector3 position, and UV data is then set as a Vector2 position.
Mesh vertices are created from the sensor position down to the terminated beam position, visible (and geometric) in the viewport and in the game screen in standalone mode.
This allows me to see the scan data in real time.
So I believe I can take the termination position of a scan beam, or more likely an obstacle position, as the touch start point, using a screen-sized rectangle to constrain the data to the actual screen area.
When the same scan beam returns to its full length, that is my touch end point, assuming a single touch occurs rather than a gesture. Any scan beams crossed while making a gesture across the screen would be used as a touch drag between the touch start and touch end points.
My thought is to extract this positional data and pass the results to TouchScript instead of generating the mesh, at the point in the code where mesh generation occurs after the scan data is passed as a Vector3 position. That way I essentially have the scan beam's termination point in 3D space, or in 2D as the Vector2 used by the UV data, with all the calculations already processed and returned as valid data.
The end result should allow single touches on objects in the scene and dragging of scene objects, which I already have set up and tested as working using TouchScript and mouse input.
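A minimal sketch of that world-to-screen conversion, assuming a camera (screenCam, as in my snippets below) covering the projection area and a hypothetical beamEndWorldPos taken from the scan data:

```csharp
// Rough sketch (untested): turn a beam termination point in world space
// into a screen-pixel position that could be handed to TouchScript.
// "screenCam" is the camera covering this projector's screen area;
// "beamEndWorldPos" is a placeholder for the beam's termination point.
Vector3 viewPos = screenCam.WorldToViewportPoint(beamEndWorldPos);

// Only treat it as a touch if it lands inside the screen rectangle.
if (viewPos.x >= 0f && viewPos.x <= 1f && viewPos.y >= 0f && viewPos.y <= 1f)
{
    // Convert normalized viewport coordinates (0..1) to pixels.
    var touchPos = new Vector2(viewPos.x * screenCam.pixelWidth,
                               viewPos.y * screenCam.pixelHeight);
    // touchPos is what a custom input source would report to TouchScript.
}
```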

Here is a video of the test scene I'm using, to help explain the above. The cyan object is the procedural mesh, generated as one item per scan beam. The touches are interpreted as obstacles, which cause the splash code to kick in. (This could be the alternative point in the code to send data to TouchScript.)
https://youtu.be/svYKJ91GrZ4

I also have this as a purely visual test scene which uses Debug.Draw calls for the beams instead of a procedural mesh.
https://youtu.be/C_jgkjbwKiM

Again sorry for the lengthy explanation but it helps me to understand and also gives you an insight into what I'm working with and trying to achieve.

Timo.

Re: Multiple displays issue with camera depth

Postby Timo » Thu Nov 09, 2017 2:30 am

Hi valyard,

there is an issue in {public class MyInputSource : InputSource}: screenWidth and screenHeight are not declared. I have added the following, but wanted to check whether I should be using the pixel width and height, or screenWidth = 1 and screenHeight = 1, as in the 0,0 to 1,1 viewport from the bottom-left corner to the top-right.

Code: Select all
       
// Device touch area width in meters
public float Width = 5;
// Device touch area height in meters
public float Height = 3;
//camera for this screen
public Camera screenCam;
//camera screen width
public float screenWidth;
//camera screen height
public float screenHeight;


Code: Select all
private void pointerPressedHandler(object sender, DeviceProxyEventArgs e)
        {
            // added
            screenWidth = screenCam.pixelWidth;
            screenHeight = screenCam.pixelHeight;

            Debug.LogFormat("Pointer {0} added at {1}.", e.Id, e.Position);
            lock (this)
            {
                deviceIdToTouch.Add(e.Id, internalAddTouch(new Vector2(e.Position.x / Width * screenWidth, e.Position.y / Height * screenHeight)));
            }
        }

        private void pointerMovedHandler(object sender, DeviceProxyEventArgs e)
        {
            Debug.LogFormat("Pointer {0} moved to {1}.", e.Id, e.Position);
            lock (this)
            {
                TouchPointer touch;
                if (!deviceIdToTouch.TryGetValue(e.Id, out touch)) return;

                // Update to new position
                touch.Position = remapCoordinates(new Vector2(e.Position.x / Width * screenWidth, e.Position.y / Height * screenHeight));
                updatePointer(touch);
            }
        }


Edit: the alternative method I was referring to above:
Code: Select all
// worldPosition would be the beam's termination point in world space
Vector3 worldPosition;
Vector3 viewPos = screenCam.WorldToViewportPoint(worldPosition);
// note: viewPos holds viewport (0..1) coordinates, not pixels
screenWidth = viewPos.x;
screenHeight = viewPos.y;


Thanks, I'm modifying the tutorial example to use my device now.
Timo.

Re: Multiple displays issue with camera depth

Postby valyard » Thu Nov 09, 2017 12:48 pm

Sorry, I haven't yet uploaded the update where I modified InputSource a bit.
That's what you are missing with screenWidth and screenHeight: they are just cached Screen.width and Screen.height.
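Roughly along these lines (a sketch of the idea, not the actual updated source):

```csharp
// Cached screen dimensions, refreshed each frame so window resizes are picked up.
private int screenWidth;
private int screenHeight;

private void updateScreenSize()
{
    screenWidth = Screen.width;
    screenHeight = Screen.height;
}
```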

Re: Multiple displays issue with camera depth

Postby Timo » Thu Nov 09, 2017 2:05 pm

Ok sir, I think I have things set up. My URGDevice class has this API for connecting to and talking to the scanners:
Code: Select all
using System;
using System.Collections.Generic;

namespace URG
{
    /// <summary>
    /// UrgDevice is the abstract class every URG device derives from.
    /// </summary>
    [Serializable]
    public abstract class URGDevice
    {
        /// <summary>
        /// Commands defined by SCIP 2.0.
        /// See also : https://www.hokuyo-aut.jp/02sensor/07scanner/download/pdf/URG_SCIP20.pdf
        /// </summary>
        protected enum SCIPCommands
        {
            VV, PP, II,
            BM, QT,
            MD, GD,
            ME         
        }
        /// <summary>
        /// Connection type of URG sensor.
        /// </summary>
        public enum URGType { Serial, Ethernet }


        protected List<long> distances;
        /// <summary>
        /// List of the distance data captured by the URG device.
        /// </summary>
        public List<long> Distances { get { return distances; } }

        protected List<long> intensities;
        /// <summary>
        /// List of the intensity data captured by the URG device.
        /// </summary>
        public List<long> Intensities { get { return intensities; } }

        /// <summary>
        /// Establish connection with the URG device.
        /// </summary>
        public abstract void Open();

        /// <summary>
        /// End connection with the URG device.
        /// </summary>
        public abstract void Close();

        /// <summary>
        /// Write data to the URG device.
        /// </summary>
        /// <param name="data"></param>
        public abstract void Write(string data);

        /// <summary>
        /// Connection status between sensor and host.
        /// </summary>
        public abstract bool IsConnected { get; }

        /// <summary>
        /// Smallest step number for capturing data from the URG device.
        /// </summary>
        public abstract int StartStep { get; }

        /// <summary>
        /// Largest step number for capturing data from the URG device.
        /// </summary>
        public abstract int EndStep { get; }

        /// <summary>
        /// Number of steps per degree * 360
        /// </summary>
        public abstract int StepCount360 { get; }
    }
}


Then this script asks for the data:

Code: Select all
using System;
using UnityEngine;
using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;
using SCIP_library;
using System.Threading;
using System.Text;

namespace URG
{
    [Serializable]
    public class EthernetURG : URGDevice
    {
        [SerializeField]
        readonly IPAddress ipAddress;

        [SerializeField]
        readonly int port;

        readonly static public URGType DeviceType = URGType.Ethernet;

        TcpClient tcpClient;
        Thread listenThread = null;

        bool isConnected = false;
        public override bool IsConnected { get { return isConnected; } }
       
        public override int StartStep { get { return 300; } }
        public override int EndStep { get { return 807; } }
        public override int StepCount360 { get { return 1440; } }

        /// <summary>
        /// Initialize ethernet-type URG device.
        /// </summary>
        /// <param name="_ipAddress">IP Address of the URG device.</param>
        /// <param name="_port">Port number of the URG device.</param>
        public EthernetURG(string _ipAddress = "192.168.0.10", int _port = 10940)
        {
            ipAddress = IPAddress.Parse(_ipAddress);
            port = _port;

            distances = new List<long>();
            intensities = new List<long>();
        }

        /// <summary>
        /// Establish connection with the URG device.
        /// </summary>
        public override void Open()
        {
            try
            {
                tcpClient = new TcpClient();
                tcpClient.Connect(ipAddress, port);
                listenThread = new Thread(new ParameterizedThreadStart(HandleClient));
                isConnected = true;
                listenThread.IsBackground = true;
                listenThread.Start(tcpClient);
            }
            catch (Exception e)
            {
                Debug.LogException(e);
            }
        }

        /// <summary>
        /// End connection with the URG device.
        /// </summary>
        public override void Close()
        {
            if (listenThread != null)
            {
                isConnected = false;
                listenThread.Join();
                listenThread = null;
            }


            if (tcpClient != null)
            {
                if (tcpClient.Connected)
                {
                    if (tcpClient.GetStream() != null)
                    {
                        tcpClient.GetStream().Close();
                    }
                }
                tcpClient.Close();
            }
        }

        void HandleClient(object obj)
        {
            try
            {
                using (TcpClient client = (TcpClient)obj)
                using (NetworkStream stream = client.GetStream())
                {
                    while (isConnected)
                    {
                        try
                        {
                            long timeStamp = 0;
                            string receivedData = ReadLine(stream);
                            string parsedCommand = ParseCommand(receivedData);

                            SCIPCommands command = (SCIPCommands)Enum.Parse(typeof(SCIPCommands), parsedCommand);
                            switch (command)
                            {
                                case SCIPCommands.QT:
                                    distances.Clear();
                                    intensities.Clear();
                                    isConnected = false;
                                    break;
                                case SCIPCommands.MD:
                                    distances.Clear();
                                    SCIP_Reader.MD(receivedData, ref timeStamp, ref distances);
                                    break;
                                case SCIPCommands.GD:
                                    distances.Clear();
                                    SCIP_Reader.GD(receivedData, ref timeStamp, ref distances);
                                    break;
                                case SCIPCommands.ME:
                                    distances.Clear();
                                    intensities.Clear();
                                    SCIP_Reader.ME(receivedData, ref timeStamp, ref distances, ref intensities);
                                    break;
                                default:
                                    Debug.Log(receivedData);
                                    isConnected = false;
                                    break;
                            }
                        }
                        catch (Exception e)
                        {
                            Debug.LogException(e);
                        }
                    }
                }
            }
            catch (Exception e)
            {
                Debug.LogException(e);
            }
        }

        string ParseCommand(string receivedData)
        {
            string[] split_command = receivedData.Split(new char[] { '\n' }, StringSplitOptions.RemoveEmptyEntries);
            return split_command[0].Substring(0, 2);
        }

        /// <summary>
        /// Read to "\n\n" from NetworkStream
        /// </summary>
        /// <returns>receive data</returns>
        protected static string ReadLine(NetworkStream stream)
        {
            if (stream.CanRead)
            {
                StringBuilder sb = new StringBuilder();
                bool is_NL2 = false;
                bool is_NL = false;
                do
                {
                    char buf = (char)stream.ReadByte();
                    if (buf == '\n')
                    {
                        if (is_NL)
                        {
                            is_NL2 = true;
                        }
                        else
                        {
                            is_NL = true;
                        }
                    }
                    else
                    {
                        is_NL = false;
                    }
                    sb.Append(buf);
                } while (!is_NL2);

                return sb.ToString();
            }
            else
            {
                return null;
            }
        }

        protected static bool TCPWrite(NetworkStream stream, string data)
        {
            if (stream.CanWrite)
            {
                byte[] buffer = Encoding.ASCII.GetBytes(data);
                stream.Write(buffer, 0, buffer.Length);
                return true;
            }
            else
            {
                return false;
            }
        }

        /// <summary>
        /// Write data to the URG device.
        /// </summary>
        /// <param name="data"></param>
        public override void Write(string data)
        {
            try {
                if (!isConnected) {
                    Open();
                }
                if (Enum.IsDefined(typeof(SCIPCommands), ParseCommand(data))) {
                    TCPWrite(tcpClient.GetStream(), data);
                }
            }
            catch (Exception e) {
                Debug.LogException(e);
            }
        }
    }
}


And here I can get distances[i], etc.
Code: Select all
    void Start()
    {
        if (useEthernetTypeURG)
        {
            urg = new EthernetURG(ipAddress, portNumber);
        }


        urg.Open();

        urgStartStep = urg.StartStep;
        urgEndStep = urg.EndStep;

        distances = new long[urgEndStep - urgStartStep + 1];

        DetectedObstacles = new Vector4[urgEndStep - urgStartStep + 1];
    }
    void Update()
    {
        if (urg.Distances.Count == distances.Length)
            distances = urg.Distances.ToArray();

        UpdateObstacleData();
    }

    void UpdateObstacleData()
    {
        for (int i = 0; i < distances.Length; i++)
        {
            Vector3 position = scale * Index2Position(i) + PosOffset;
            if (IsOffScreen(position) || !IsValidDistance(distances[i]))
            {
                distances[i] = 0;
            }
            DetectedObstacles[i] = new Vector4(position.x, position.y, position.z, distances[i]);
        }
    }
    // scan range: near clip to far clip, in mm
    static bool IsValidDistance(long distance)
    {
        return distance >= 21 && distance <= 30000;
    }

    bool IsOffScreen(Vector3 worldPosition)
    {
        Vector3 viewPos = thisCam.WorldToViewportPoint(worldPosition);
        return (viewPos.x < 0 || viewPos.x > 1 || viewPos.y < 0 || viewPos.y > 1);
    }
    public void Connect()
    {
        urg.Write(SCIP_library.SCIP_Writer.MD(urgStartStep, urgEndStep, 1, 0, 0));
    }

    public void Disconnect()
    {
        urg.Write(SCIP_library.SCIP_Writer.QT());
    }

    float Index2Rad(int index)
    {
        float step = 2 * Mathf.PI / urg.StepCount360;
        float offset = step * (urg.EndStep + urg.StartStep) / 2;
        return step * index + offset;
    }

    Vector3 Index2Position(int index)
    {
        return new Vector3(distances[index] * Mathf.Cos(Index2Rad(index + urgStartStep)), distances[index] * Mathf.Sin(Index2Rad(index + urgStartStep)));
    }


This last method, Index2Position, is the conversion algorithm that gives the x, y coordinates of the scan beam; however, I think DetectedObstacles would be the better data to send to TouchScript.
Code: Select all
 DetectedObstacles[i] = new Vector4(position.x, position.y, position.z, distances[i]);


I'm stuck on how to incorporate this with the new InputSource, though, to pass TouchScript the x, y coordinates for an obstacle or for the Vector3 from Index2Position.
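One possible shape for the glue code, sketched under the assumption that the tutorial's deviceIdToTouch, internalAddTouch and updatePointer still apply, naively using the scan-step index as the pointer id (internalRemoveTouch is a guess at the matching removal call, and thisCam is the camera from the snippets above):

```csharp
// Sketch (untested): walk the obstacle list each frame and report
// begin/move/end to the input source, keyed by scan-step index.
private void reportObstacles()
{
    for (int i = 0; i < DetectedObstacles.Length; i++)
    {
        Vector4 o = DetectedObstacles[i];
        bool hit = o.w > 0; // w holds the distance; 0 means no obstacle

        Vector3 viewPos = thisCam.WorldToViewportPoint(new Vector3(o.x, o.y, o.z));
        var screenPos = new Vector2(viewPos.x * Screen.width, viewPos.y * Screen.height);

        TouchPointer touch;
        bool known = deviceIdToTouch.TryGetValue(i, out touch);

        if (hit && !known)
        {
            // New obstacle: touch begins.
            deviceIdToTouch.Add(i, internalAddTouch(screenPos));
        }
        else if (hit && known)
        {
            // Obstacle still present: touch moves.
            touch.Position = screenPos;
            updatePointer(touch);
        }
        else if (!hit && known)
        {
            // Beam back to full length: touch ends.
            internalRemoveTouch(touch);
            deviceIdToTouch.Remove(i);
        }
    }
}
```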

Cheers, Timo.

Re: Multiple displays issue with camera depth

Postby Timo » Thu Nov 09, 2017 4:53 pm

Hi valyard, I'm struggling with the input remapper and how to assign my vectors for conversion. If I try to add anything to the ICoordinatesRemapper it gets upset, presumably because it's not a MonoBehaviour.
Where do I add my function to remap, please?
The help document for the remapper doesn't give me enough insight into how to supply the original coordinates for remapping. I can see a function inside the input source script which uses a bool for whether the remapper is set; is this where I would add my original Vector2 / Vector3 data?
Starting to feel like the most stupid person on the planet trying to get this to work. :oops:
Cheers.. Timo

Re: Multiple displays issue with camera depth

Postby valyard » Fri Nov 10, 2017 1:21 pm

Mmm... Not sure how to properly convert your data to TouchScript pointers. The problem is that you need to be able to assign unique ids to obstacles. Can your system do this?

As for remappers, those are just objects implementing an interface: http://touchscript.github.io/docs/html/ ... mapper.htm
It doesn't matter whether it is a MonoBehaviour or not; with a MonoBehaviour it is just easier to put them on game objects.
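For example, a remapper that flips the vertical axis might look like this (a sketch assuming the interface exposes a single Remap(Vector2) method, as the linked docs suggest, and lives in the TouchScript.InputSources namespace):

```csharp
using UnityEngine;
using TouchScript.InputSources;

// Plain class, no MonoBehaviour needed.
public class FlipYRemapper : ICoordinatesRemapper
{
    public Vector2 Remap(Vector2 input)
    {
        // Flip screen-space Y; any custom mapping could go here instead.
        return new Vector2(input.x, Screen.height - input.y);
    }
}
```

It would then be assigned to the input source's remapper property (again, an assumption based on the tutorial's input source code).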
