Thanks for the speedy reply! While the problem is most noticeable while streaming media or during a sustained download, my gut says it's also behind our inconsistent web browsing. So far it does not appear to occur more at any particular time of day. Here are some test results...
The Maximum Segment Size for this path section is 1380 Bytes.
This is probably an underestimate of the actual queue size.
And from another test... (see image)
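For anyone wondering how that 1380-byte MSS figure relates to MTU: for IPv4 with no header options, MSS is generally the path MTU minus 40 bytes of IP+TCP headers, so 1380 implies an effective path MTU around 1420. That's below the usual PPPoE value of 1492, which would hint at extra encapsulation somewhere on the path. A rough sketch of the arithmetic (the helper names are mine, not from any test tool):

```python
# Rough MSS/MTU arithmetic for TCP over IPv4 (no header options).
IP_HEADER = 20   # bytes, IPv4 header without options
TCP_HEADER = 20  # bytes, TCP header without options

def mss_from_mtu(mtu: int) -> int:
    """MSS that would be advertised for a given path MTU."""
    return mtu - IP_HEADER - TCP_HEADER

def mtu_from_mss(mss: int) -> int:
    """Effective path MTU implied by an observed MSS."""
    return mss + IP_HEADER + TCP_HEADER

print(mtu_from_mss(1380))   # effective path MTU implied by the test above
print(mss_from_mtu(1492))   # what plain PPPoE (MTU 1492) would give instead
```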
Forgot the download transfer test from the same period...
That is a mess....
I've assumed up to this point that you've ruled out the client systems as the culprit for the Forced Pauses. Have you considered running a Wireshark trace, or trying a Linux distribution (if you're running Windows for these tests), to see whether something is choking the connection off, for example due to the receive window varying for some reason? No particular reason for asking, other than that I'd like to rule out a few more possibilities behind those Forced Pauses before we try to get Verizon to toy with the private ATM circuit. On many remote-DSLAM-based lines fed by Junipers, we see some pretty funny things happen. Verizon uses Junipers on its end of the 10-15 Mbps circuits because of their greater capacity to route over the older gear. In this case one isn't showing, but that may be down to the Western region's naming conventions, or they simply may not use Juniper out where you are.
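On the receive-window idea: a single TCP flow's throughput is capped at roughly window / RTT, so a receiver that shrinks its window mid-transfer will visibly choke a 10-15 Mbps circuit. A back-of-the-envelope sketch (the window sizes and RTT here are illustrative, not measurements from this line):

```python
def max_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on single-flow TCP throughput: window / RTT, in Mbps."""
    return window_bytes * 8 / (rtt_ms / 1000) / 1e6

# A classic 64 KB window at 40 ms RTT can't fill a 15 Mbps pipe:
print(round(max_throughput_mbps(65535, 40), 1))
# With auto-tuning opening the window to 256 KB, the cap rises well past it:
print(round(max_throughput_mbps(262144, 40), 1))
```

That's why the suggestion above to watch the receive window in a trace (or compare against a Linux box with different auto-tuning behavior) is worth the effort before blaming the circuit.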
For comparison, here's my connection with all security software disabled and Windows Vista's TCP auto-tuning heuristics tuned for my line. Left is downstream, right is upstream. Upstream has QoS throttling/prioritization applied, which lowers the quality of its results on the test.
http://seansite.dyndns.org/downloads/test.png
The results still aren't as solid on my line as they are at work, but the downstream transfer shown isn't what I'm seeing on my end of the pipe (the Visualware tests measure server-side), so the data may be getting passed along oddly on Verizon's end. Probably delays from the way their PVC-based provisioning works once enough data has been transferred.
So far, not including more server-oriented operating systems, I've tested with WinXP, a few Win7 machines, a couple of laptops running different Ubuntu releases, a PS3 (integrated bandwidth test; probably inaccurate), and wireless Android and iOS devices. I've considered using a sniffer, but not sure what exactly I'd be watching for. I may do that, though, just in case something jumps out.
I hate to completely rely on web-based tests, so I recently dropped my wired devices behind an ASA firewall. It gives me a tad more visibility and allows me to easily monitor interface utilization. So far, the web tests match what I see on the firewall interfaces.
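For anyone monitoring utilization the same way from raw interface counters (SNMP ifInOctets, or the ASA's own interface stats), the math is just the octet delta over the polling interval against link speed. A minimal sketch; the counter values below are invented, and it ignores 32-bit counter wrap:

```python
def utilization_pct(octets_start: int, octets_end: int,
                    interval_s: float, link_bps: float) -> float:
    """Percent link utilization from two octet-counter readings."""
    bits = (octets_end - octets_start) * 8
    return bits / interval_s / link_bps * 100

# Example: 45 MB transferred during a 60 s poll on a 15 Mbps downstream link
print(round(utilization_pct(0, 45_000_000, 60, 15_000_000), 1))
```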
It might sound silly, but there is still a bit of noise near the pole at the front of my residence (had it checked out a couple weeks into troubleshooting). I am going to have our utility company completely disconnect the street light and investigate. Just for grins.
@jp78945 wrote: So far, not including more server-oriented operating systems, I've tested with WinXP, a few Win7 machines, a couple of laptops running different Ubuntu releases, a PS3 (integrated bandwidth test; probably inaccurate), and wireless Android and iOS devices. I've considered using a sniffer, but not sure what exactly I'd be watching for. I may do that, though, just in case something jumps out.
I hate to completely rely on web-based tests, so I recently dropped my wired devices behind an ASA firewall. It gives me a tad more visibility and allows me to easily monitor interface utilization. So far, the web tests match what I see on the firewall interfaces.
It might sound silly, but there is still a bit of noise near the pole at the front of my residence (had it checked out a couple weeks into troubleshooting). I am going to have our utility company completely disconnect the street light and investigate. Just for grins.
I wouldn't be surprised if there were still noise; I've seen a lot of screwy things happen with DSL. At one point I had to assist someone with a mysterious hum, where every line from their pole-mounted DSLAM to their house had loud humming induced on it. They were very far out, essentially in a mountainous area miles from the CO, so plain telephone service and T1-fed DSL were really all that was available. I'm not sure what ultimately came of it, but they spent a ton of time trying to locate the source of the noise. In the end we believed it was damaged underground cabling and splices picking up noise from the electrical plant above.
Well, the city disconnected the light and eliminated all noise at the pole; no improvement. Although I guessed it was unrelated, it was worth a shot. I've also discovered that from about 2:30 AM to 5:00 AM things seem to be great (14.5-15.0 Mbps down). Possibilities?
1. The problem is indeed network congestion, although no one else in my area seems affected to the same degree (I'm now seeing about 0.5-1.5 Mbps down in the evenings). If so, how much lost revenue does it take before Verizon schedules infrastructure upgrades? I've considered reaching out to municipal council members, but I'm not sure that will help without a large time investment. Besides, I'd (probably incorrectly) expect network monitoring to make such a bottleneck readily apparent to the maintenance folks.
2. The problem is crosstalk of some sort between my copper pair and a neighbor's (they say they began using DSL earlier in the year). I do not believe the copper-pair swap requested by Verizon's other teams was ever actually completed.
3. As suggested above, an issue with CO/DSLAM hardware...
Any other likely causes come to mind?
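One way to make the congestion case (possibility 1) concrete when escalating is to log speed tests around the clock and show the peak vs off-peak collapse. The numbers above already imply a drop of roughly 10x to 30x. A sketch of that arithmetic, using the midpoints of the figures quoted in this post:

```python
overnight_mbps = (14.5 + 15.0) / 2   # 2:30-5:00 AM window
evening_mbps = (0.5 + 1.5) / 2       # typical evening
print(round(overnight_mbps / evening_mbps, 1))  # ratio of off-peak to peak-hour throughput
```

A consistent daily pattern like that is hard to explain with crosstalk or a bad pair, which don't keep office hours.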
Check your PMs. Going to have someone look at the network from the DSLAM towards the backbone for you to see if they spot anything.
Thanks to all for the suggestions, and sorry to resurrect an old thread. However, several individuals asked for an update :).
Verizon was nice enough to pick my problem back up and give it another look. They determined that network congestion was the root cause, and as time passed that became more believable/apparent. Since the sharp drop in service happened in January, I have to wonder whether it was partly due to the reprovisioning of DSL plans/rates (several nearby customers had no idea their sync rate had been bumped; higher speeds for no price increase is good, right?). An infrastructure upgrade was apparently scheduled for Q3 2012, but a recent call revealed that tech support has very limited (read: no) visibility into its progress. Does anyone know a contact @ Verizon who might have access to this info?
Streaming media (even low quality video) has been absolutely impossible since January, so patience is unfortunately running a bit thin. In my most recent conversation with Verizon, the bluntly honest (which I appreciated) employee recommended use of an advocacy group for higher visibility.
Before starting the newspaper articles, web petitions, and city council meetings: does anyone have any pointers to best navigate the insufficient infrastructure issue without bringing any negative attention to the local Verizon employees whose hands are completely tied?
Thanks again to all who've shown interest in our problems!
I know some folks have had luck getting ongoing problems like this resolved by going to the Executive level, but as I recall, you had already tried that. Anyway...
The best you may be able to do is return to Executive support and see if they can reveal some info about the upgrade schedule; they should have some way to obtain it. I don't know how long such upgrades usually take to be approved and completed, but if it's a matter of a circuit upgrade, and assuming it's fiber and the cards on both ends support it, it should be a very quick job.
Otherwise, a little birdy told me that if a DSLAM runs at 95% capacity for 15 days out of a month, it gets attention from the responsible teams really quickly.
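If that 95%-for-15-days rule of thumb is real (I'm only going on the hearsay above), it's a simple check against daily utilization logs. A sketch with invented sample data, just to show the shape of the test:

```python
def flags_for_upgrade(daily_peak_util_pct, threshold=95.0, days_required=15):
    """True if at least `days_required` days hit the utilization threshold."""
    hot_days = sum(1 for u in daily_peak_util_pct if u >= threshold)
    return hot_days >= days_required

# 16 congested days out of 30 would trip the rule; 10 would not.
congested_month = [96] * 16 + [70] * 14
quiet_month = [96] * 10 + [70] * 20
print(flags_for_upgrade(congested_month), flags_for_upgrade(quiet_month))
```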
Were you able to reach out to my contact at the time?
I sent a PM to your contact just a few minutes ago. Maybe a fresh look will help correlate all of the information! I just can't shake the feeling that some issue has been missed (even if there really is a congestion issue, too), since all the trouble started with sync issues after -very- heavy rainfall last December and January. More info soon!