
There's one question about Network Streams that is unclear to me: how exactly does the timeout value affect endpoint creation? The question is not as silly as it seems.

I have one program with an input and an output Network Stream running on PC1, and a second one with the same streams running on PC2. If I set the timeout to 150 ms or less, the connection between reader and writer in some cases is not established at all, but when I set the timeout to 160 ms or above, everything starts to work OK.

I checked this on two PCs running Windows 7 64-bit + LV 2016 64-bit, and on a mixed pair running Windows 7 64-bit + LV 2016 64-bit and Windows XP 32-bit + LV 2011 32-bit. In the latter case the connection was established only one way - from XP with LV11 to Windows 7 with LV16.

The init loop on the master side is pretty simple, and the slave's loop is almost the same - it just has a url input wired in the form "//IP-address/endpoint name". On both sides the writer endpoint connects to the reader endpoint by its IP address.

According to the docs on the subject I have these things done:
- NI PSP Service Locator (lkads.exe) and LabVIEW are both added to the Windows firewall exceptions
- no 3rd-party firewalls or AV are installed on the machines
- the Windows firewall service is switched off and disabled completely

When trying to create reader/writer endpoints on both sides I can see active incoming/outgoing packet exchange in the network adapter's parameters (the same is shown in tools like TCPView). So, what's the reason the timeout has to be higher than 150 ms? Why can't I set it to 50-100 ms, as in my other applications?
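Since LabVIEW diagrams can't be pasted as text, here is a minimal Python sketch of what the init loop does, with a plain TCP connect standing in for the Network Streams endpoint handshake. The host, port, and timeout values are illustrative assumptions, not the real ones from my setup:

```python
# Minimal sketch of the init-loop logic in Python: plain TCP stands in
# for the Network Streams handshake; host/port/timeouts are made up.
import socket

def try_connect(host: str, port: int, timeout_ms: int) -> bool:
    """One connection attempt with a given timeout, analogous to one
    call of the endpoint-creation step with a timeout wired in."""
    try:
        with socket.create_connection((host, port), timeout=timeout_ms / 1000.0):
            return True
    except OSError:  # timeout, refused, unreachable - all count as no connection
        return False

if __name__ == "__main__":
    HOST, PORT = "192.168.1.10", 6000  # hypothetical reader endpoint address
    for timeout_ms in (50, 100, 150, 160):
        result = "connected" if try_connect(HOST, PORT, timeout_ms) else "no connection"
        print(f"timeout={timeout_ms} ms -> {result}")
```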

Quote: Well, in my LabVIEW RT Application where I'm establishing 4 Network Streams between my PC Host and a Remote RT Target, I use a timeout of 15000 ms, with the proviso that if the Streams do not connect, they simply re-try.

Well, I'm using the same approach - if there's no success establishing a connection, both endpoints simply try to connect again and again. But I'm also using a Stop mechanism based on a notifier: when my Stop button is pressed, the notifier is set to True, all the loops must stop, and the whole app must shut down. So it is better for me when the connection is established as fast as possible - I wouldn't think to quibble if it were two orders of magnitude faster.
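In text form, the retry loop with the notifier-based Stop looks roughly like this (again a Python sketch, with threading.Event standing in for the LabVIEW notifier; names and values are illustrative):

```python
# Retry-until-connected with a notifier-style Stop, sketched in Python.
# threading.Event plays the role of the LabVIEW notifier.
import socket
import threading
from typing import Optional

stop = threading.Event()  # set when the Stop button is pressed

def connect_loop(host: str, port: int, timeout_ms: int) -> Optional[socket.socket]:
    """Retry until connected or until Stop is signalled. The Stop notifier
    is only re-checked between attempts, so a long timeout_ms delays
    shutdown by up to one full attempt."""
    while not stop.is_set():
        try:
            return socket.create_connection((host, port), timeout=timeout_ms / 1000.0)
        except OSError:
            continue  # no success yet - try again, then re-check Stop
    return None  # Stop was pressed before a connection was made
```

With a 15000 ms timeout the worst-case reaction to Stop is 15 s spent inside the attempt in progress; with 150 ms it is 150 ms, which is why the short timeout matters to me.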

Quote: Note that, depending on what else is going on (and, I presume, one of your Endpoints is on a Windows OS, which is not so deterministic), a 10 msec difference (between 150 and 160 ms) is "in the noise" (so to speak).

All my endpoints run on Windows. But the boundary is sharp: I checked a dozen times - when the timeout is 153 ms (to be very precise), no connection is made at all, yet when I add just 1 ms it suddenly starts to work! I'm interested in this and in the details because I've been into LabVIEW for a while.
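For anyone who wants to poke at the threshold themselves, a hypothetical timing probe (again plain Python sockets, not the actual Network Streams handshake, and with a made-up address): measure how long one bare TCP connect between the two machines takes, to see how much of a 150 ms budget the transport alone consumes. Network Streams add their own handshaking (via the PSP Service Locator) on top of raw TCP, so endpoint creation can plausibly need noticeably more time than a bare connect.

```python
# Hypothetical timing probe: how long does one bare TCP connect take?
import socket
import time

HOST, PORT = "192.168.1.10", 6000  # substitute the reader endpoint's address

start = time.perf_counter()
try:
    with socket.create_connection((HOST, PORT), timeout=5.0):
        print(f"TCP connect took {(time.perf_counter() - start) * 1000:.1f} ms")
except OSError as exc:
    print(f"connect failed: {exc}")
```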
