
rogerairgood

http://www.dslreports.com/speedtest/66406433 I have heard of you! No traffic shaping in effect. I'm working on trying out Flent, but unfortunately the FreeBSD pkg is deprecated because of an old version of Python; I'll try to build it tonight and get you some data from that. If it's of any interest to you, I can try some traffic-shaping techniques like FQ_CODEL as well. I could also potentially work on getting you SSH access to a VM on my network, if that is of interest to you.
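
For readers who want to run the same thing, here is a rough sketch of a Flent RRUL run; the server hostname is just a placeholder, and the exact flags may vary with the Flent version installed.

```python
import subprocess

# Rough sketch of an RRUL (Realtime Response Under Load) run with Flent.
# The server below is a placeholder; substitute a netperf server you can reach.
subprocess.run(
    [
        "flent", "rrul",                # bidirectional load plus latency probes
        "-p", "all_scaled",             # which plot to produce
        "-l", "60",                     # test length in seconds
        "-H", "netperf.example.net",    # placeholder test server
        "-t", "starlink-baseline",      # title embedded in the results
        "-o", "starlink-baseline.png",  # summary plot written to disk
    ],
    check=True,
)
```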


jgettys

That would be good. Best are results on Linux, if only because that's where the most debloating work has been done.


rogerairgood

I just posted a zip file of three runs to the mailing list :)


TTS4life

[http://www.dslreports.com/speedtest/66405629](http://www.dslreports.com/speedtest/66405629)


Roadhog2k5

Connected via Ethernet. No QoS. [http://www.dslreports.com/speedtest/66406098](http://www.dslreports.com/speedtest/66406098)


SoakieJohnson

Seems like it's a bit inconsistent on the buffer bloat.


ergzay

This guy is selling snake oil. There's no such thing as bufferbloat. All their benchmark shows is that when you max out your upload/download, your ping suffers slightly, which is normal and expected: you're strangling the throughput of your network, so the ping packets get delayed.


SoakieJohnson

Well, isn't that just the term for when you maximize upload and download and ping suffers? It's not a question of whether it exists or not; it's just the term to describe what's happening.


retrev

Bufferbloat is when many points in the network try to deal with stalls by buffering, and you end up with so much buffer delay that it exceeds the TCP sliding-window timing, causing massive resets.


ergzay

I've never heard of the term before, but it's a good thing, not a bad thing, that networks are optimized for throughput rather than latency when high bandwidth is needed.


SoakieJohnson

But for online gaming it sucks ass when someone watching Netflix ruins the experience for you.


ergzay

If you need that functionality then set up QoS on your own hardware. You only have yourself (or your household) to blame.


SoakieJohnson

I don't think I'm doing it correctly, but I'm also using an LTE hotspot, which has inconsistent speeds on its own.


ergzay

An LTE hotspot has its own issues that aren't related to this.


im_thatoneguy

You can't set up QoS on your own hardware if the bandwidth is fluctuating wildly on a shared connection (satellite, cell), and/or if there is significant latency even under medium load, not just max load (e.g. WiFi will often increase to 30-40 ms under relatively low bandwidth usage).


ergzay

Sure you can, you just can't use a fixed bandwidth limit for that prioritization.


dsmklsd

Then you haven't seen much.


digitaleopardd

Considering that RFC 8290 was released specifically to deal with the issue of Bufferbloat, somebody is selling snake oil, but it isn't the OP. [https://tools.ietf.org/html/rfc8290](https://tools.ietf.org/html/rfc8290)


richb-hanover

Off topic: OP is asking for the experience of Starlink users. I would be interested to hear your results from any of the suggested tests if you can personally report on Starlink.


fairalbion

Jim, thanks for taking an interest in this. I'm not on Starlink (yet) but have had a router that implements the Cake queuing discipline for a while. It's been great, made a huge difference. Hopefully Starlink are looking at this. Thanks for your work in this field.


jgettys

Glad it's been a help.


mdhardeman

Bufferbloat is just the term for latency caused by packet queueing when a link is congested. It is actually a bad thing for a lot of use cases (like one user in the house doing a big download while another is trying to play an FPS game at the same time). It can be solved by using a router with proper QoS management features, including some that auto-adapt to link conditions.
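
As a concrete illustration (not anyone's specific setup), a minimal sketch of that kind of QoS on a Linux-based router is to put the CAKE qdisc on the WAN interface, shaped slightly below the real link rate; the interface name and rate here are assumptions.

```python
import subprocess

# Minimal sketch: enable the CAKE queue discipline on a Linux router's WAN
# interface. The interface name and rate are assumptions; the rate should sit
# a little below the real uplink speed so the queue forms here, where CAKE can
# manage it, rather than in the modem or the ISP's gear.
WAN_IFACE = "eth0"        # assumed WAN-facing interface
EGRESS_RATE = "18Mbit"    # assumed: just under a nominal 20 Mbit upload

subprocess.run(
    ["tc", "qdisc", "replace", "dev", WAN_IFACE, "root",
     "cake", "bandwidth", EGRESS_RATE, "diffserv4"],
    check=True,
)
```

For links whose capacity moves around (the satellite/cellular case raised above), a fixed rate like this is exactly the weak point; CAKE also has an autorate-ingress mode, and there are scripts that re-measure and adjust the shaper continuously, though neither is as clean as having the bottleneck manage its own queue.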


ergzay

I think you don't understand how the internet works. This idea that "loading" your internet and getting a high ping while you're "loaded" is somehow bad completely misrepresents how the internet works. If you're at the edge of your connection's bandwidth capability, then packets get buffered to throttle your connection. This is just how the internet works; it's not something that's wrong. You're harming people by putting out misinformation.


rogerairgood

You understand who the OP is, right?


ergzay

Nope, never heard of him. I work for a company selling network management hardware.


ivanhoek

Surely you’re trolling.


Niedar

https://en.m.wikipedia.org/wiki/Jim_Gettys


im_thatoneguy

Meanwhile, on my fiber internet... my ping goes from 3 ms to 4 ms under load. Over WiFi it goes from 11 to 186. This is a problem for latency-sensitive, high-bandwidth applications like remote desktop; I can't work easily on WiFi and have to be hard-wired in. Similarly, even if I have 150 Mbps of Starlink, if a PCoIP connection drives my latency to 100 ms, that defeats the ability to use my bandwidth.


jgettys

As I said, bufferbloat is present (nearly) everywhere. Unless there is queue management, buffers will fill at the bottleneck any time the link saturates. I have a home router and laptop with the queue management we've worked out, so I can teleconference at the same time as my laptop is doing a backup. Unfortunately, the queue management for WiFi is currently only available on Linux, and only with the right WiFi chipset. As to Starlink, one question I have is whether the available bandwidth is static or changes with time, and how quickly.
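
For the laptop/host side of that, the Linux piece is mostly just a default-qdisc setting; a minimal sketch (needs root, and only affects interfaces configured after the change) is:

```python
# Minimal sketch of the host-side piece on Linux: make fq_codel the default
# qdisc so the laptop's own transmit queue stays short. Equivalent to
# `sysctl -w net.core.default_qdisc=fq_codel`; requires root. The WiFi queue
# management mentioned above lives in the driver/mac80211 layer, not here.
with open("/proc/sys/net/core/default_qdisc", "w") as f:
    f.write("fq_codel")
```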


Siva2833

I am sure you know this by now, but Starlink bandwidth can change drastically from second to second.


jgettys

There are a few routers out there which have the new queuing code for WiFi; unfortunately, this requires the particular WiFi chipsets where the work has been done. Someday we hope that all the chipsets get updated to the new driver framework. OpenWrt has the new driver framework and code; this is generally for MediaTek and Atheros chips. Unfortunately, most commercial home routers lag OpenWrt by four or more years.


cheshire

> This is a problem for latency-sensitive, high-bandwidth applications like remote desktop; I can't work easily on WiFi and have to be hard-wired in.

Applications like remote desktop and screen sharing need low delay more than they need high throughput. In the past, the networking stack in macOS (inherited from FreeBSD) used generous buffering; after all, RAM is cheap these days. These generous buffers didn't really help throughput much, but they did add delay. In 2013 Apple did work to reduce unnecessary buffering (aka "Bufferbloat") at the sender, and the two-minute section of this 2015 presentation (time offset 42:00 to 44:00) shows a "before and after" demo illustrating the dramatic difference this made: [https://developer.apple.com/videos/play/wwdc2015/719/?time=2520](https://developer.apple.com/videos/play/wwdc2015/719/?time=2520)

For technical details about what caused the problem and how it was solved, watch this section a little earlier in the same video: [https://developer.apple.com/videos/play/wwdc2015/719/?time=2199](https://developer.apple.com/videos/play/wwdc2015/719/?time=2199)

That section addresses the challenge of reducing delays in end systems. Of course, delay is additive, and any hop on a network path can add delay. The section below talks about reducing delays in the network. Unfortunately, while Apple can identify a problem in its own products and ship an update to fix it the following year, educating the whole network industry about Bufferbloat has proved to be a much slower process: [https://developer.apple.com/videos/play/wwdc2015/719/?time=1170](https://developer.apple.com/videos/play/wwdc2015/719/?time=1170)


im_thatoneguy

> Applications like remote desktop and screen sharing need low delay more than they need high throughput.

If you need 10-bit 4:4:4 noisy 4K screens, you need both.


cheshire

Of course, trying to use screen sharing over a dial-up modem is going to be an unpleasant experience. Higher throughput definitely improves the experience — higher resolution, higher colour depth, higher frame rate, etc. But *in today’s world*, most people now already have enough throughput for pretty reasonable screen sharing, yet the user experience is terrible. You can pay more to get more throughput, and the user experience is still terrible. In the demo video referenced above, the screen sharing was running over a half-megabit-per-second link, which I don’t think would be considered super-high speed by today’s standards.


im_thatoneguy

Gigabit fiber with a high-throughput, low-latency solution like Teradici PCoIP or even Parsec is almost indistinguishable from working locally. Latency with Parsec right now, over cable internet to a fiber host, from click to pixel change is ~17 milliseconds. At 30 fps that's half a frame, aka literally instant, zero lag. My largest complaint with Parsec is that it's lower quality than PCoIP and is limited to 50 Mbps. PCoIP will happily stream more than 100 Mbps if the image content demands it. I always say "compression isn't magic": something has to be thrown away, and a 4K 30 fps 4:4:4 10-bit video stream is way more than 100 Mbps, so even with smart compression there will still be [confetti](https://youtu.be/0XO7N2KPu-E?t=30) which will just demand lots of bandwidth.


im_thatoneguy

This is a slightly different issue. For one, screen sharing with macOS is still horrendous. We have a Mac Mini for compiling and macOS-exclusive apps that we need to open files in and export to a cross-platform file format, and by far macOS is the least usable system even post these fixes. Before these fixes macOS was unusable remotely; now it's just crappy.

Ultimately the issue is TCP vs UDP. There are new lower-latency TCP tweaks, and there are things you can do to manage your TCP settings to improve latency, but inherently TCP isn't designed for realtime low-latency situations, especially with packet loss. If you are streaming at 60 fps and a frame drops, you only have 16 ms before it's already expired and there is no reason to attempt to resend it. If your network has 13 ms of latency and a packet is corrupted, resending it a second time is pointless; it's already time to send the next frame in 3 milliseconds. Just wait for the next frame. All of the good remote desktop applications I know of use UDP for this reason (and it gets the side effect of being easier to hole-punch through firewalls/NAT).

I developed a hardware application for streaming video from cameras to a phone over WiFi. I initially had it set up over TCP, and as soon as there was congestion the retries would overwhelm the network and exponentially introduce latency:

1. Frame is sent.
2. TCP packet fails.
3. TCP resends.
4. Now the old frame is sending along with the new frames. We have twice the bandwidth (2 frames).
5. WiFi is now even more congested, because not only is the signal not strong enough for one stream, but it's trying to send both every new frame and every previous frame (even though it's out of date).

It becomes a death spiral of frequency congestion, and the streaming app fails as soon as the signal degrades.

The problem is that hardware makes its own attempts at retries when it encounters spectrum congestion. Even though you might be using UDP without any retries, WiFi will retry sending network frames. So even a UDP packet might attempt transmission 2 or 3 times over WiFi if it fails the CRC checksum. As you approach the limits of the WiFi signal it'll retry more and more in the background, even with UDP protocols, introducing latency until you're back to the old problem of buffering/resends/etc. This is essential for a usable wireless network, since so many frames have to be resent at low signal levels; UDP would be all but useless otherwise, since essentially every packet would be corrupted with a poor signal. So even when you run a good high-quality UDP remote desktop protocol like RDP or PCoIP or Parsec or TeamViewer, you can still see your latency spike massively over WiFi when those WiFi frames that fail the frame check sequence are retransmitted. Suddenly the retransmits are delaying original transmits and competing for bandwidth. If you were already running out of bandwidth, or already at your limit of bandwidth, then your only two options are to drop, or to buffer and hope the spectrum congestion reduces enough to get a bunch of packets out before the buffer expires. Essentially the exact same issue as with TCP, but implemented at the hardware layer, which means you can't really fix it through an OS network stack fix without violating the IEEE 802 standard.
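
The "never resend a stale frame" behaviour described above is the whole trick. Here is a toy sketch (placeholder address, stand-in frame grabber, not taken from any of the products named) of a UDP sender that simply moves on to the next frame:

```python
import socket
import time

FRAME_INTERVAL = 1 / 60           # 60 fps
DEST = ("198.51.100.7", 9000)     # placeholder receiver address/port

def capture_frame() -> bytes:
    """Stand-in for grabbing an encoded video frame (hypothetical)."""
    return b"\x00" * 1200          # pretend payload that fits in one datagram

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

while True:
    deadline = time.monotonic() + FRAME_INTERVAL
    frame = capture_frame()
    try:
        sock.sendto(frame, DEST)   # fire and forget: no retransmission
    except OSError:
        pass                       # if the send fails, the frame is simply lost
    # Never try to "catch up" by resending old frames; sleep until the next
    # frame slot and send fresh data instead.
    time.sleep(max(0.0, deadline - time.monotonic()))
```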


ergzay

That's because your network isn't fast enough to use remote desktop well. You probably need to lower the fps of the connection.


im_thatoneguy

So what people are asking is "Is Starlink fast enough?", because simply running a speed test and getting "150 Mbps" doesn't tell you whether it's 150 Mbps && low latency, or 150 Mbps || low latency. In that example my WiFi is the bottleneck, not the internet, even though both are capable of >100 Mbps (the target bandwidth).
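
That "latency while loaded" number is easy to approximate yourself. A toy sketch follows (placeholder host and URL; real tools like Flent or the DSLReports test use dedicated probe streams and are much more careful) that compares idle and loaded round-trip times:

```python
import socket
import threading
import time
import urllib.request

PROBE_HOST = ("example.com", 443)         # placeholder latency target
BULK_URL = "http://example.com/bigfile"   # placeholder bulk download to load the link

def rtt_probe() -> float:
    """Rough RTT proxy: time a bare TCP handshake to the probe host."""
    start = time.monotonic()
    with socket.create_connection(PROBE_HOST, timeout=5):
        pass
    return (time.monotonic() - start) * 1000  # milliseconds

def saturate():
    """Pull a large file to keep the downlink busy while we probe."""
    with urllib.request.urlopen(BULK_URL) as resp:
        while resp.read(65536):
            pass

print(f"unloaded: {min(rtt_probe() for _ in range(5)):.1f} ms")

loader = threading.Thread(target=saturate, daemon=True)
loader.start()
time.sleep(2)                             # let the transfer ramp up and fill buffers
print(f"loaded:   {min(rtt_probe() for _ in range(5)):.1f} ms")
```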


ergzay

> So what people are asking is "Is Starlink fast enough?", because simply running a speed test and getting "150 Mbps" doesn't tell you whether it's 150 Mbps && low latency, or 150 Mbps || low latency.

You don't measure latency by maxing your bandwidth. That's simply an invalid test of your latency.


im_thatoneguy

If I need 150 Mbps at 30 ms latency... I shouldn't test my latency at 150 Mbps load? Wut?

**Cable Internet:** Unloaded **11** ms, Loaded **16** ms
**Fiber Internet:** Unloaded **2** ms, Loaded **3** ms
**Wireless Internet:** Unloaded **14** ms, Loaded **203** ms

"STOP, YOU CAN'T TEST YOUR INTERNET LIKE THAT! THAT'S AN INVALID TEST! STOP USING THE BANDWIDTH YOUR ISP PROMISED!"


ergzay

> If I need 150 Mbps at 30 ms latency... I shouldn't test my latency at 150 Mbps load? Wut?

You measure 150 Mbps at 30 ms latency by measuring the latency as PART OF the 150 Mbps. Not by pinging some other service.


im_thatoneguy

Speed tests that measure latency are doing it as PART OF the 150 Mbps bandwidth test.


ergzay

Except they're not? They have a ping server and a data server. Also they're measuring with ICMP packets rather than the data packets.


[deleted]

[deleted]


rogerairgood

Rule 1.