The infrastructure underlying AS54316 -- the network supporting the services offered by the AP Foundation -- has historically been based out of a datacenter in NYC (NetActuate in Telehouse Chelsea). To add some resiliency, I thought it'd be interesting -- and a potentially fun challenge -- to leverage some spare real estate at my home in neighboring Connecticut, rather than co-locating at another datacenter.
Hosting "always-on" physical infrastructure poses a few challenges -- climate control, power, security, network connectivity, etc. While each of these is probably worth its own post (some still in progress), the last element -- network connectivity -- was a fascinating journey which I thought was worth sharing (taking 2+ months, 2.5+ miles of fiber, a handful of police officers, and 10+ telecom engineers).
We manage our own IPv4 and IPv6 address space allocated by ARIN, which is announced under AS54316 by BGP peering with IP transit providers. Getting these transit relationships in a data center setting is quite easy -- your colocation provider might offer it, and cross connects to global providers like GTT, Cogent, HE, etc. are often available (special shout out to Neptune Networks in NYC, for providing us with affordable IP transit at Telehouse Chelsea).
BGP peering in a residential setting proved to be an entirely different story.
I'm lucky to live in an area serviced by two different fiber-to-the-home (FTTH) providers -- Verizon FiOS and Optimum (Altice) -- and I was already using FiOS for personal use. However, bring up the phrase "BGP peering" to either of these providers and you'll quickly see their eyes glaze over...it's just not something they do, since BGP is typically only leveraged by large enterprises.
While it is possible to BGP peer over a VPN connection to VPS providers like Neptune or Vultr (who offer IP transit), I wanted to have confidence in the underlying connectivity as most FTTH providers lack any SLA.
I could have purchased FTTH service from both Verizon FiOS and Optimum, and then multi-homed over VPNs to a VPS, but both ISPs mostly use above-ground infrastructure strung on local power utility poles -- in my area, trees falling on lines seemed like the most likely common point of failure to which both would be equally vulnerable.
Seeking a more sophisticated solution over the FTTH ISPs, I eventually came across the concept of "dedicated internet access" ("DIA"; or "dedicated ethernet", depending on who you're talking to).
Almost all non-enterprise FTTH/FTTB ISP services are architected around a shared resource model, where your neighborhood is essentially "sharing" a backhaul circuit to the ISP's central office (CO) using technologies like passive optical networking (PON). While there's a bunch of interesting technology and economics at play, this architecture enables ISPs to run only a single fiber strand to service an entire neighborhood, leveraging passive optical splitters to break out connections to each home/building (saving on cost in the central office with less equipment, and in the field with less fiber). ISPs tend to oversubscribe these services as well (where you and your 10 neighbors might all be able to sign up for 1Gbps symmetric service, but not everyone can leverage that full 1Gbps at the same time).
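The oversubscription arithmetic above is easy to sketch. The numbers below are illustrative assumptions (a typical GPON-style 1:32 split), not any particular ISP's actual design:

```python
# Hypothetical PON split: one ~2.5 Gbps downstream backhaul shared by 32 homes,
# each sold a 1 Gbps plan. Illustrative numbers only, not a real ISP's ratios.
backhaul_gbps = 2.5        # shared downstream capacity of the PON
homes_on_splitter = 32     # a common passive optical split ratio
plan_gbps = 1.0            # advertised per-home speed

sold_capacity = homes_on_splitter * plan_gbps     # 32.0 Gbps "sold"
oversubscription = sold_capacity / backhaul_gbps  # 12.8x oversubscribed
fair_share = backhaul_gbps / homes_on_splitter    # capacity if everyone maxes out

print(f"{oversubscription:.1f}x oversubscribed; "
      f"worst-case fair share {fair_share * 1000:.0f} Mbps per home")
```

In practice statistical multiplexing makes this work fine for residential traffic -- everyone rarely peaks at once -- but it's exactly the guarantee DIA removes the need to gamble on.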
With DIA on the other hand, the ISP runs a dedicated pair of fibers from a router in their CO, straight to your building, only for your use, with guarantees on bandwidth, uptime, latency, jitter, etc. and importantly, repair time. It's literally a long fiber connection from your router to the ISP's "point of presence" (POP) router. These types of services are (as you can imagine) expensive and therefore mostly used by larger enterprises who can justify the cost. But crucial for my use case, DIA providers allow customers to BGP peer with their POP router.
Now that I knew what I was trying to buy, I went shopping for DIA providers that might service my area -- having little idea on what I was getting myself into. The obvious options were Verizon Enterprise and Lightpath (Altice's enterprise brand), since I knew they already had infrastructure in the area. Though for good measure I reached out to many of the major connectivity players (AT&T, Cogent, Zayo, Crown Castle, Lumen, etc).
A few surprises came out of this search:
Ultimately I decided to move forward with Verizon Enterprise given they claimed my home was already "on net" (meaning "on network", implying no build-out fee), and had transparent pricing which they actually listed on their website (no other provider did). To give an idea of cost, Verizon (and most other providers) will basically sell you whatever circuit speed you want, but often some pre-set "tiers" make the most sense (as of this writing, VZ was pricing 50Mbps committed @ $455/mo; 100Mbps @ $661/mo; 1Gbps @ $999/mo; 5Gbps @ $2,099/mo; and 10Gbps @ $3,099/mo).
The above prices are certainly more expensive than FiOS (and way slower for those lower tiers), but remember these circuits are backed with an SLA and come with BGP sessions, dual-stack connectivity, etc. You can also get creative with 95th percentile billing, but the pricing didn't make sense for my use case (e.g., a circuit with 100Mbps committed, burstable to 1Gbps, was actually more expensive than a 1Gbps committed circuit -- even ignoring overages).
Taking the 50Mbps committed tier at $455/mo as an example: the cost consists of two elements: ~$90/mo for the actual internet "access", with the remaining ~$365/mo going towards the physical layer 1 connectivity (the "port"). More on this later.
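Putting the quoted tiers and the access/port split together shows why the pricing curve looks the way it does -- the fixed port cost dominates at low speeds, so the effective per-Mbps price falls steeply as the committed rate goes up:

```python
# Verizon's quoted committed tiers from the sales process above (Mbps -> $/mo).
tiers = {50: 455, 100: 661, 1000: 999, 5000: 2099, 10000: 3099}

for mbps, usd in tiers.items():
    print(f"{mbps:>6} Mbps @ ${usd:>5}/mo -> ${usd / mbps:6.2f} per Mbps")

# The 50 Mbps tier roughly splits into internet "access" vs. the layer-1 "port";
# the port is the fixed cost that gets amortized at higher committed speeds.
access, port = 90, 365
assert access + port == tiers[50]
```

At the 50Mbps tier you're effectively paying ~$9 per committed Mbps; at 1Gbps it's about $1.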
As part of the service configuration, I was surprisingly able to select the type of network handoff I wanted (e.g., 100base-FX, 1000base-LX, 1000base-T, single-mode, multi-mode, etc) -- coming from lower tier services, I had assumed it'd be a "you get what you get" situation, though this eventually made more sense once I got into the implementation stage.
As an aside, the sales process was quite refreshing compared to what I'm used to as well. In many cases, rather than getting connected to a generic call center, I was connected to a specific sales rep who specialized in the area I was calling about -- and everyone was willing to get a sales engineer involved who was intimately familiar with the nuances.
This was where the really interesting stuff happened, ultimately taking ~2 months, 10+ Verizon engineers/technicians, 5 dispatches, and a handful of police officers (!).
I quickly learned that Verizon actually segments their business into "Verizon Telecom" (handling the physical layer 1 infrastructure) and "Verizon Business" (handling the routing layer). The bulk of the work was done by the former.
The activation process was managed offshore by a dedicated "order manager" who coordinated internally within Verizon, and served as my main contact point through the installation.
First, Verizon Telecom sent a "High Capacity Outside Plant Engineer" to conduct a site survey -- a lovely gentleman by the name of Roger who had been with Verizon for what I believe was 40+ years. During his visit, we walked my street, figured out which utility poles they'd be running the fiber drop from, where my house's service entrance was, and where the server rack would be. He also explained the conduit requirements for their fiber cable in relation to my service entrance and the server rack, and the power / rack space requirements for their equipment (4U of space, which seemed like a lot!).
I learned that on most major streets they run giant bundles of fiber optic cable (I think he mentioned 800+ strands on some roads?), with many of those strands left dark for future projects like mine. Since I lived right off a major street, he said they'd run a few hundred feet of new fiber, which they'd splice into a larger bundle (with one end ultimately connecting to a switch port in Verizon's local central office, other end to the server rack).
A few days passed while Verizon's engineering team processed the site survey and planned the construction on their end, and then I eventually got a "firm order commitment" date for around ~1.5 months later (the guaranteed "go-live"). That initially felt like a long time, but once I fully understood the work that went into setting up the circuit, I thought it was quite fast!
There was a bunch of work on my end left to do as well -- including installing appropriate power, a service entrance, the server rack, networking equipment, etc. -- which of course took much longer than I planned. Once I completed a few essentials, I submitted a "site ready" certification to Verizon, so they could begin construction on their end.
A few weeks later, a crew of four technicians stopped by with a bucket truck to run a new fiber optic cable from the utility pole to the new server rack -- and thanks to some oddities in my local law, they also had to have local police standing by. The crew also installed a 2U fiber patch panel on the rack.
Oddly though, I noticed the fiber cable wasn't terminated at either end. After chatting with the crew, it turned out a separate fiber splicing crew would later splice it at the utility pole and the patch panel.
For those who are curious, the installed cable was a 12-strand single-mode cable from Corning, which they ran directly from the utility pole into the rack (compared to the 1-strand cable already in place for the FiOS service, which was adapted to an indoor patch cable before entering the building).
Later that week another technician stopped by to install a NID, which took 1U of rack space and serves as Verizon's service demarcation point (where "their problems" are demarcated from "your problems"). These boxes are apparently built to be robust, and allow their NOC engineers to run diagnostics on the network remotely. Being naturally curious, I took a closer look at the hardware: it's a Canoga Perkins 9145E, with a 40 kilometer 1310nm 1.25Gbps single-mode SFP transceiver installed on the "telecom" (NNI) side, and a similar 10km transceiver on the "user" (UNI) side.
Oddly, the version they installed only had a single internal PSU, though the manufacturer offers a dual PSU model (I requested the dual PSU version but was told this was "not Verizon certified").
The technician who came by amusingly mentioned that never in his career had he been dispatched to install a NID in a house.
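A 40km-rated optic on a run this short leaves an enormous amount of optical headroom. Here's a rough link-budget sketch -- every figure below is a typical datasheet-style assumption (launch power, receiver sensitivity, splice/connector counts), not a measurement from this circuit:

```python
# Rough optical link budget for a ~2.5 mile run on a 40 km 1310 nm SFP.
# All values are assumed typical figures, not measured on this installation.
tx_power_dbm = -2.0           # conservative minimum launch power for a 40 km optic
rx_sensitivity_dbm = -22.0    # typical receiver sensitivity at 1.25 Gbps
fiber_loss_db_per_km = 0.35   # typical single-mode attenuation at 1310 nm
splices, splice_loss_db = 6, 0.1      # guessed fusion-splice count along the path
connectors, conn_loss_db = 2, 0.5     # patch panel / NID connector losses

run_km = 2.5 * 1.609344
path_loss = (run_km * fiber_loss_db_per_km
             + splices * splice_loss_db
             + connectors * conn_loss_db)    # ~3 dB total
budget = tx_power_dbm - rx_sensitivity_dbm  # 20 dB available
margin = budget - path_loss                 # lots of headroom

print(f"path loss ≈ {path_loss:.1f} dB, margin ≈ {margin:.1f} dB")
```

With ~3 dB of path loss against a ~20 dB budget, the link could tolerate a lot of extra splices before the receiver noticed.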
A couple of weeks passed since the last dispatch; after checking in, Verizon said a crew of two splicers was working in manholes to "get the fiber pairs to my site", meaning from their local central office to the new server rack. Of course that made a lot of sense after thinking about it -- the dark fibers mentioned earlier, which are left unused in the field, are not connected on either end -- so a crew was literally having to fusion splice fiber pairs across a patchwork of bundles throughout town, to ultimately get live pairs to the site.
Sure enough, a few days later I saw a handful of police officers and a couple of Verizon trucks at the end of my road (a bucket truck and a splicing van). The next day we made an appointment for them to finish the work inside the house and get the patch panel spliced to the fiber cable which the earlier crew had run. This was a fun conversation -- the crew was nice enough to entertain my questions and let me watch one tech fusion splice in the house while the other worked on splicing at the utility pole.
After they tested the final splices via OTDR, the final run -- from the local CO to the patch panel -- ended up being ~2.5 miles (!).
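For the curious: 2.5 miles of glass adds essentially nothing to latency. A back-of-envelope propagation-delay estimate, assuming a typical single-mode fiber group index of ~1.47 (an assumption, not a measured value for this cable):

```python
# One-way propagation delay over the ~2.5 mile spliced run.
C = 299_792.458          # speed of light in vacuum, km/s
GROUP_INDEX = 1.47       # typical group index for single-mode fiber (assumed)
run_km = 2.5 * 1.609344  # ~4.02 km

v = C / GROUP_INDEX              # ~204,000 km/s in the fiber
one_way_us = run_km / v * 1e6    # delay in microseconds

print(f"one-way delay ≈ {one_way_us:.1f} µs")
```

Roughly 20 microseconds one way -- the gateway router sitting 30-40 miles away in NYC dominates the round-trip time, not the local run.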
Like the others, these guys seemed to have been with Verizon for 30+ years and to genuinely enjoy their work, having transitioned from the old copper days to new fiber runs.
After their work was through, I was a bit confounded as to why the circuit still seemed offline -- and then came the last dispatch.
This is where I began to see the distinction between Verizon Telecom (VzT) and Verizon Business (VzB). While the layer 1 infrastructure had been installed and tested, there still remained the IP layer, which required another dispatch for another technician to install a dedicated NID for Verizon Business -- a Ciena 3903, with 1310nm single mode optics.
This NID essentially serves the same function as the first one, but for VzB. VzT provides an EVC (Ethernet virtual connection) to VzB, and apparently I was a customer of the latter (I tried calling Verizon Telecom and it seemed they were not used to getting calls from people like me!).
Ultimately, here's how things ended up getting connected: VZ CO ↔ ~2.5 miles of fiber ↔ VzT NID ↔ VzB NID ↔ our router
Once that last NID was installed and tested, I was shortly provided with IP details to spin up my connection, along with peering details for the BGP session.
In the activation phase, I was given a couple of interesting bits of information on the underlying Verizon infrastructure. Namely, it seems they rely on a Fujitsu 9500 for DWDM / ROADM and a Juniper MX960 for the gateway / peering router (which is located ~30-40 miles away in NYC, rather than the local CO).
Posted on 01-29-2023. Last modified on 01-31-2023.