> Upon hearing of the technical issue, non-participating members of an unrelated IETF mailing list announced that they would take up the charge to get a draft written to not quite address the issue and expected a completed, approved standard within the next 13 years.
He who fucks goats, whether as part of a performance or to troll those he deems to have overly delicate sensibilities, is, simply, a goatfucker.
He claimed he was just pretending to be racist to trigger the social justice warriors, but even if he is telling the truth, Popehat's Law of Goats still applies.
You're probably overthinking it; like I say, I was joking. The guy runs a satirical website with a sharp edge, and his first two comments on this site were 'thanks, you're very kind', or some such. I thought the contrast amusing, so I posted as much.
Once you get the page to load you'll be able to read that “Jumbo frames don't work on the internet”. You may have been the first to actually prove that point :)
Reminds me of the fun of naming hosts before they became all formalized. I recall, in the late '90s at a large medical distributor, how we set the user/client DHCP pool names from a virus database (there are more viruses than stars in the universe, apparently).
It was a nice in-house admin chuckle that, AFAIK, nobody in upper management ever found out about. Our in-joke was that viruses kill hosts, and the thinking was that without users, every server/service ran 100% fine.
I know servers are supposed to be "cattle", but I can't get away from the feeling that having at least part of the name be somewhat sentimental is a nice thing. The default naming scheme for many containers, for example, will combine an adjective and a noun to create a unique name. I myself wrote a pretty silly script once that randomly returned the name of a British or German naval warship. It seems to me that "formalized" naming doesn't have to be mutually exclusive with sentimentality; you can just add it at the end, like:
geotag-srvtype[#]-prod-[pokemon_name_or_whatever]
That way, if your environment is completely fluid, it becomes obvious when a node has re-deployed, and if it's not, you can just re-use the old name when re-deploying instead of generating a new one.
You can of course include hash-looking things in hostnames as well, but I'm a strong believer that hashes are for computers, not for humans to read (most of the time).
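A minimal sketch of that kind of generator (the geotag/service-type inputs and the pet-name word list here are placeholders; swap in Pokemon, warships, or whatever suits you):

```python
import random

# Placeholder "sentimental suffix" pool -- substitute your own list.
PET_NAMES = ["bulbasaur", "charmander", "squirtle", "pikachu", "eevee"]

def host_name(geotag, srv_type, index, env="prod", rng=random):
    """Build a name like lon-web1-prod-pikachu: formal prefix, fun suffix."""
    pet = rng.choice(PET_NAMES)
    return f"{geotag}-{srv_type}{index}-{env}-{pet}"

print(host_name("lon", "web", 1))
```

Passing a seeded `random.Random` instead of the module makes the suffix reproducible, which is handy if you want to re-use the old name on re-deploy as described above.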
You totally should name the container classes after battleships.
Then you can truthfully say you deleted/sank the Bismarck.
Anyway, I think there is nothing wrong with naming classes of things after known things - so you are deploying two more Victor-class frontend servers or Dreadnought monitoring services.
The trouble comes when you care about Dreadnought-oux3z and treat it as not disposable.
Not everyone buys into the cattle-not-pets thing. It's mainly a thing related to auto-deployment strategies, which are great for the cloud with its quick auto-scaling capabilities. The ideal vision there is that you shouldn't even have the ability to log in to a server. But there are other use cases where a known environment you can poke around in is more useful, IMO.
As an example, in cloud environments not much time is spent investigating issues. When something fails the solution is often to just wipe and redeploy. I prefer fully understanding the problem and having the tools to troubleshoot. Especially because in my work the servers still have different roles.
Ah, the fun of naming hosts. I once named hosts using a long list of trees I found in a scholarly publication. The list also gave botanical names, so for a chuckle I placed them in the corresponding TXT records. You could `dig paperbark-maple` to learn it was "Acer griseum". The good old days.
I'm sure there are many vira, but that comparison can't be true.
> Astronomers estimate there are about 100 thousand million stars in the Milky Way alone. Outside that, there are millions upon millions of other galaxies
That is a lot less than the number of viruses; the linked article says "An estimated 10 nonillion (10 to the 31st power) individual viruses exist on our planet—enough to assign one to every star in the universe 100 million times over."
So we can easily give one IPv6 address to every star in the observable universe _OR_ one IPv6 address to every virus on our planet, but we can't possibly address via IPv6 every single virus on every possible planet.
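A quick back-of-the-envelope check of that claim (the star count is the commonly quoted rough order of ~10^21 for the observable universe, an assumption on my part):

```python
ipv6_addresses = 2 ** 128      # ~3.4e38 total IPv6 addresses
viruses_on_earth = 10 ** 31    # the "10 nonillion" figure from the article
stars_observable = 10 ** 21    # rough common estimate, observable universe

# One address per star, or one per virus on this one planet: both fit easily.
assert ipv6_addresses > stars_observable
assert ipv6_addresses > viruses_on_earth

# But one virus-laden planet per star blows the budget immediately.
planets_we_could_cover = ipv6_addresses // viruses_on_earth
print(planets_we_could_cover)  # only ~3.4e7 planets' worth of addresses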
The 6509 and 2811 and other older models are amazing. I use Catalyst switches and routers I bought from eBay and Craigslist almost 10 years ago that were probably resold several times after production use. I am no longer in networking, but I was always impressed when I would ssh into some arbitrary switch or router to troubleshoot and see 10+ year uptime. I bet 20+ is common these days.
I am sure others have stories in this regard. Someone told me their remote router died after 12 years uptime but only because of a lightning strike.
From a software perspective, the Cisco IOS has been doing "unikernel" before it was hyped. Can't say that's a factor but having a standalone binary as the entire OS might perhaps make testing and shipping a sturdy stack easier?
I'm embarrassed to say I have some Cisco 3750 edge switches with 10+ year uptimes. They are isolated at a management level but really should have their firmware upgraded. Arranging downtime with 30 different stakeholders who don't subscribe to the concept of multiple services capable of failure is never easy.
I love how many of the comments here think this story is real, and not just the first article published by what appears to be a newly-launched satire tech news site inspired by The Onion. Follow the "prev page" links at the bottom for other articles, you won't be disappointed :)
But 6509 longevity is real and the best satire mirrors our real experiences.
Was there ever a gandalf 6509? Yeah. Probably 10,000 of them and many are 22 years old. My 22 year old gandalf/bilbo pair were actually VA Linux hosts. But no doubt lots of 6509s were so named.
I teach computer networking in high school and Token Ring is still part of the curriculum. I have no idea why, but the curriculum gives Token Ring and Ethernet exactly equal weight, despite its latest revision being written in 2017.
Fun fact: in the University of Oxford (former home of one JRR Tolkien) each building has a router and switch cabinet where JANET comes into the building and is distributed. What is this called? Well, it's the Front Door to the internet, or Frodo for short...
Not totally on topic, but can someone explain why switched mode power supplies don't seem to have any type of inrush current limiting?
Often when I turn on power right after a power loss, the circuit breaker immediately trips. I assume it's because all those power supplies have such a big inrush current that they overload the circuit. All these devices use less than an amp in total on average, but the inrush current is enough to immediately trip a 13A circuit breaker.
Regarding the breaker side, there are different characteristics available.
For example, I've switched the breaker of my workspace from B13 to C13. Both trip at sustained currents above 13A (plus some margin), but the C13 tolerates higher short pulsed loads before its instant-trip mechanism fires.
> Not totally on topic, but can someone explain why switched mode power supplies don't seem to have any type of inrush current limiting?
They absolutely do - an NTC thermistor, and they even have a relay to bypass the NTC after the PSU has booted up. If you hear a click after booting up, there is a bypass relay.
>Often when I turn on power right after a power loss, the circuit breaker immediately trips.
Domestic circuit breakers tend to be B or C class; B triggers much more easily than C, and the latter tend to be used for motors and other loads with high inrush current. If you believe you have too much inrush current, you might wish to replace the breaker. Also, PSU datasheets specify how many of them can be connected in parallel on the same breaker.
Last but not least - if you have too much inrush current, it's likely there is no power factor correction either, so the high value caps are connected straight after the bridge rectifier.
Because that's another part that adds to the BOM cost and assembly cost.
SMPSes all have input stages that effectively directly rectify the incoming mains to provide the SMPS internal bus voltage. This is also a great way to basically design an inrush current simulator (diode(s) + big capacitor).
Adding stuff like active PFC to your common wall-wart is a non-starter. Those things are already massively cost-optimized.
No, they have enough that they don't trip a breaker when you plug them in the first time.
Also toroidal cores are worse than modern active PFC SMPSs.
Also note that there are different breaker classes for different loads, with the more-forgiving ones having tighter demands on the impedance of the downstream wiring, because they still have to reliably trip instantly on a short.
How long do those NTCs take to reset? Maybe that explains why the breaker trips after I turn it back on after a power failure -- if those NTCs are still warm, they would not limit the inrush current.
Modern high efficiency PSUs should have no problem with this, because their active PFC limits input current. NTCs also reduce efficiency and don't work when toggling power, unless you add a relay to short it out. They're a hack.
You need to use an NTC that'll stay within power limit under load to avoid a fire hazard when the relay doesn't work; if you try to do the same with a normal resistor you need large and expensive power resistors.
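A quick sizing comparison makes that trade-off concrete (the resistance values and load current below are illustrative assumptions):

```python
V_PEAK = 325.0     # illustrative European mains peak voltage
I_LOAD = 1.0       # assumed steady-state input current, amps

# NTC thermistor: high resistance cold, drops to a fraction of an ohm hot.
r_ntc_cold = 10.0
r_ntc_hot = 0.5

# A fixed resistor giving the same inrush limiting keeps its full value.
r_fixed = 10.0

inrush_limited = V_PEAK / r_ntc_cold          # ~33 A peak instead of hundreds
p_ntc_no_bypass = I_LOAD ** 2 * r_ntc_hot     # 0.5 W if the relay fails: warm
p_fixed_no_bypass = I_LOAD ** 2 * r_fixed     # 10 W: needs a big power resistor
print(inrush_limited, p_ntc_no_bypass, p_fixed_no_bypass)
```

That 20x dissipation gap is why the NTC-plus-relay hack wins on cost: the thermistor degrades gracefully when the bypass fails, where a fixed resistor would need to be rated for the full I²R continuously.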
Every variable and every method name was given a female first name. "Each of these is named for one of Sten's girlfriends." Given the number of names required, it was improbable that these were real girlfriends, but Sten gave no hint about this being fiction.
I've seen elven names, I've seen planets, I've seen Hindu deities, I've seen 7 dwarves, and then there's boring corp names that actually provide useful information about purpose/location/etc.
You can definitely tell personalities by system names.
There is a known hardware bug affecting every PSU and cards in the 6500/7600 series. It manifests as a significant chance of permanent failure on power-up after being on for several years. As long as the chassis stays powered everything is fine. If you reboot or power cycle you may end up with some random dead cards or power supplies.
The use of the "enterprise" 6509 as a backbone ISP router was the worst kept secret in the industry in the 2003-2007 timeframe. During that time you would see about 60% of the routers in a carrier hotel were 6500/7600's. Most of the rest were Juniper MX or M with very few Cisco 12k, ASR, or CRS. The 12k used an inefficient fixed cell size architecture and focused on ATM line cards, and the CRS-1 was far too expensive for most operators. The Ethernet-based 6000/6500 series was originally developed by the enterprise vertical at Cisco as an L2 product, but it gradually evolved to full route-scale L3 features at a much lower price point than the "real" ISP routers. This price advantage was especially dramatic when 10GE interfaces for the platform came out in 2003. Cisco replaced the 12k with the more popular Ethernet-focused ASR series, but the very low cost of the 6500 kept them in use much longer than they should have been.
While dirt cheap due to being officially marketed at enterprises and a robust grey market for parts, the 6509 was far from an ideal platform for ISPs. It generally performed well, but there were a number of quirks from an internal architecture that gradually evolved from L2-only to full-scale L3. RP cpu/ram was woefully underpowered and control plane policing was marginal at best. Any significant packet flow to the RP, instead of being forwarded by the ASIC hardware, would instantly cripple the router and cause it to drop all BGP/OSPF/IS-IS sessions. In addition to broken control plane policing, there were more scenarios where packets could be punted to the RP unexpectedly compared to other platforms. For example, if any of the TCAM partitions filled up then ASIC forwarding was immediately disabled and ALL packets were sent to the RP! You also had to watch traffic levels when round-tripping on adjacent 10GE ports on the most popular cards to ensure it didn't exceed about 6 gbps, or those ports would start dropping packets. Real ISP routers can often do full line rate in+out on all ports.
The monolithic IOS for the platform was notoriously buggy, partly due to the architecture and partly due to an enterprise feature set that most ISPs didn't even need. It was wise to avoid any version where the release notes claim to have just fixed a catastrophic bug (you wanted to see only boring fixes to cosmetic bugs or features you would never use) and to search the c-nsp mailing list for others' experiences with the version. Otherwise you were playing "IOS Roulette". Because of this it was common to run a known stable version much longer than normal, running the same version for 5+ years in some cases. After the fail-on-power-up bug became known, most left them running without even a reboot until they could be replaced.
In more recent years Arista has made a similar play, evolving their switches from an original focus on minimum L2 latency to full route-scale L3, and more recently adding full MPLS support at a lower price point than the equivalent C/J products.
This is very interesting insight, and being able to absorb cohesive info as an interested but unfamiliar outsider is appreciated.
I had a bit of a cursory google and probably quite predictably didn't find much beyond cisco issue reports - is there any speculation out there about the actual root cause of the hardware faults you refer to?
The field notice only applies to cards but the power supplies were definitely prone to permanent failure on power on, possibly from a different type of failure. I replaced more dead power supplies (that had died on power up/cycle) than line cards over the years that I used them.
NOTE: This packet is sold by net wait, not by volume. Packed as full as practicable by modern automatic equipment, it was delayed the full net wait indicated. If it does not appear full when opened, it is because contents have been compressed during shipping and handling.
RESEARCH TRIANGLE PARK, NC, CISCO SYSTEMS INC 25-Dec-2021
After only 22 years of service, a 6509 chassis deployed at a customer location has reportedly reached its final end of life. Taken from us in its prime, lon1-gandalf01, known simply as “Gandalf” by those close to it, put in its final notice after the power outage at Interexion’s LON1 facility killed off the last working PSU. Network Engineer Rich Ikhanda, the technician tasked with placing gandalf into its final resting box, stated “it’s a real shame to see this router taken out of service while it still had its youth. I mean, it was barely out of its teens! A true tragedy.” Rich reminisced that “once we disabled that bastard protocol, IPv6, we could easily fit the entire v4 table into Gandalf’s TCAM with plenty of room to spare, until everyone else had the same idea and even the ‘real’ internet table grew too large,” referencing the IPv4 default free zone. “Luckily we were able to slice, dice, and julienne using as-paths, like real network engineers.”
Gandalf will be replaced by a UniFi USG-Pro nicknamed “Gandalf the White”. When asked if there were any concerns with Gandalf’s replacement, Rich stated that “I am not sure about this whole app thing to run a network, but it has gigabit interfaces, it should be good to go.”
Gandalf is survived by nearly one million IPv4 routes and a 6513-E named Frodo, all of whom will feel the impact from this loss (as they continue to reconverge).
https://www.jumboframeinternet.com/post/2/